Test Report: Docker_Linux_crio_arm64 19443

8b84af123e21bffd183d137e5ca9151109c81e73:2024-08-15:35789

Failed tests (3/328)

| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 34    | TestAddons/parallel/Ingress                 | 151.09       |
| 36    | TestAddons/parallel/MetricsServer           | 348.41       |
| 174   | TestMultiControlPlane/serial/RestartCluster | 129.04       |
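
The full log for each failure follows. To reproduce a case locally, the test can be re-run through minikube's integration-test harness; the sketch below is a hypothetical invocation based on minikube's contributor docs (the -minikube-start-args and -test.run flag names are assumptions to verify against your checkout), using the same driver as this job:

    # Re-run a single failed integration test from a minikube source checkout.
    # This job also passed --container-runtime=crio to minikube start.
    env TEST_ARGS="-minikube-start-args=--driver=docker -test.run TestAddons/parallel/Ingress" make integration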
TestAddons/parallel/Ingress (151.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-177998 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-177998 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-177998 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b515be63-4242-4831-bd26-bc1663fca405] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b515be63-4242-4831-bd26-bc1663fca405] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.005391449s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-177998 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.586383575s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
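
For context: the remote command here is curl, and curl's exit status 28 (surfaced above as "ssh: Process exited with status 28") is its operation-timed-out code, which points at the ingress never answering within the deadline rather than returning an HTTP error. A manual spot check against the same endpoint, with verbose output added, might look like:

    out/minikube-linux-arm64 -p addons-177998 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"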
addons_test.go:288: (dbg) Run:  kubectl --context addons-177998 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-177998 addons disable ingress-dns --alsologtostderr -v=1: (1.012498624s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-177998 addons disable ingress --alsologtostderr -v=1: (7.79814773s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-177998
helpers_test.go:235: (dbg) docker inspect addons-177998:

-- stdout --
	[
	    {
	        "Id": "f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c",
	        "Created": "2024-08-15T00:39:31.885620063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1405563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T00:39:32.048929554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c/hostname",
	        "HostsPath": "/var/lib/docker/containers/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c/hosts",
	        "LogPath": "/var/lib/docker/containers/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c-json.log",
	        "Name": "/addons-177998",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-177998:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-177998",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5c37f7a13c271ecb6e5a866ea04cdedad952da520e99e0cba0cbc34549f2fd75-init/diff:/var/lib/docker/overlay2/433fc574d59582b9724e66836c411c49856e3ca47c5bf1f4fddf41d4347d66bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c37f7a13c271ecb6e5a866ea04cdedad952da520e99e0cba0cbc34549f2fd75/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c37f7a13c271ecb6e5a866ea04cdedad952da520e99e0cba0cbc34549f2fd75/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c37f7a13c271ecb6e5a866ea04cdedad952da520e99e0cba0cbc34549f2fd75/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-177998",
	                "Source": "/var/lib/docker/volumes/addons-177998/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-177998",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-177998",
	                "name.minikube.sigs.k8s.io": "addons-177998",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fcd529918dc8229ec8a14529dbf4ae2d92130c18352d82a20722b9bb641475d5",
	            "SandboxKey": "/var/run/docker/netns/fcd529918dc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34600"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34601"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34604"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34602"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34603"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-177998": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19e67a6599deee7485939663658b47300858ad6be1ef7a9abf09d7eb7eba7567",
	                    "EndpointID": "74fb35ccc1b99b398bdd521b351edda008d5fd38a090f82aee77223fcdba796c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-177998",
	                        "f371aab23012"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-177998 -n addons-177998
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-177998 logs -n 25: (1.33357707s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-660423                                                                     | download-only-660423   | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC | 15 Aug 24 00:39 UTC |
	| start   | --download-only -p                                                                          | download-docker-283129 | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC |                     |
	|         | download-docker-283129                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-283129                                                                   | download-docker-283129 | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC | 15 Aug 24 00:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-338566   | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC |                     |
	|         | binary-mirror-338566                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37403                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-338566                                                                     | binary-mirror-338566   | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC | 15 Aug 24 00:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC |                     |
	|         | addons-177998                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC |                     |
	|         | addons-177998                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-177998 --wait=true                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC | 15 Aug 24 00:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:42 UTC | 15 Aug 24 00:43 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-177998 ip                                                                            | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | -p addons-177998                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-177998 ssh cat                                                                       | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | /opt/local-path-provisioner/pvc-2ebb18e5-943e-4735-a7ec-2a8e78491a99_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-177998 addons                                                                        | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | addons-177998                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | -p addons-177998                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-177998 addons                                                                        | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:44 UTC | 15 Aug 24 00:44 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:44 UTC | 15 Aug 24 00:44 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:44 UTC | 15 Aug 24 00:44 UTC |
	|         | addons-177998                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-177998 ssh curl -s                                                                   | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-177998 ip                                                                            | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:46 UTC | 15 Aug 24 00:46 UTC |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:46 UTC | 15 Aug 24 00:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:46 UTC | 15 Aug 24 00:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:39:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:39:06.663435 1405068 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:39:06.663683 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:39:06.663712 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:39:06.663732 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:39:06.664028 1405068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 00:39:06.664528 1405068 out.go:298] Setting JSON to false
	I0815 00:39:06.665471 1405068 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33689,"bootTime":1723648658,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0815 00:39:06.665590 1405068 start.go:139] virtualization:  
	I0815 00:39:06.668036 1405068 out.go:177] * [addons-177998] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 00:39:06.670279 1405068 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:39:06.670349 1405068 notify.go:220] Checking for updates...
	I0815 00:39:06.673720 1405068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:39:06.675387 1405068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 00:39:06.677145 1405068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	I0815 00:39:06.678840 1405068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 00:39:06.680662 1405068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:39:06.682694 1405068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:39:06.704497 1405068 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:39:06.704615 1405068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:39:06.769250 1405068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 00:39:06.758683232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:39:06.769373 1405068 docker.go:307] overlay module found
	I0815 00:39:06.771848 1405068 out.go:177] * Using the docker driver based on user configuration
	I0815 00:39:06.773693 1405068 start.go:297] selected driver: docker
	I0815 00:39:06.773713 1405068 start.go:901] validating driver "docker" against <nil>
	I0815 00:39:06.773727 1405068 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:39:06.774365 1405068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:39:06.826851 1405068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 00:39:06.817180385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:39:06.827037 1405068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:39:06.827282 1405068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:39:06.829488 1405068 out.go:177] * Using Docker driver with root privileges
	I0815 00:39:06.831656 1405068 cni.go:84] Creating CNI manager for ""
	I0815 00:39:06.831681 1405068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:39:06.831695 1405068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:39:06.831788 1405068 start.go:340] cluster config:
	{Name:addons-177998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:39:06.833954 1405068 out.go:177] * Starting "addons-177998" primary control-plane node in "addons-177998" cluster
	I0815 00:39:06.835832 1405068 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 00:39:06.837986 1405068 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:39:06.839688 1405068 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:39:06.839732 1405068 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:39:06.839744 1405068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0815 00:39:06.839753 1405068 cache.go:56] Caching tarball of preloaded images
	I0815 00:39:06.839831 1405068 preload.go:172] Found /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0815 00:39:06.839841 1405068 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:39:06.840226 1405068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/config.json ...
	I0815 00:39:06.840282 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/config.json: {Name:mk96a96c05a74a2b5c03f13fa38572f835869738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:06.857568 1405068 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:39:06.857758 1405068 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:39:06.857790 1405068 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 00:39:06.857795 1405068 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 00:39:06.857813 1405068 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 00:39:06.857819 1405068 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 00:39:23.609566 1405068 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 00:39:23.609608 1405068 cache.go:194] Successfully downloaded all kic artifacts
	I0815 00:39:23.609650 1405068 start.go:360] acquireMachinesLock for addons-177998: {Name:mk8732f60cab24aa263ea51a6dc6ae45b69ed64f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:39:23.610415 1405068 start.go:364] duration metric: took 716.155µs to acquireMachinesLock for "addons-177998"
	I0815 00:39:23.610460 1405068 start.go:93] Provisioning new machine with config: &{Name:addons-177998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:39:23.610561 1405068 start.go:125] createHost starting for "" (driver="docker")
	I0815 00:39:23.613009 1405068 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0815 00:39:23.613258 1405068 start.go:159] libmachine.API.Create for "addons-177998" (driver="docker")
	I0815 00:39:23.613292 1405068 client.go:168] LocalClient.Create starting
	I0815 00:39:23.613401 1405068 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem
	I0815 00:39:24.150713 1405068 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem
	I0815 00:39:24.620546 1405068 cli_runner.go:164] Run: docker network inspect addons-177998 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 00:39:24.636727 1405068 cli_runner.go:211] docker network inspect addons-177998 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 00:39:24.636817 1405068 network_create.go:284] running [docker network inspect addons-177998] to gather additional debugging logs...
	I0815 00:39:24.636837 1405068 cli_runner.go:164] Run: docker network inspect addons-177998
	W0815 00:39:24.650962 1405068 cli_runner.go:211] docker network inspect addons-177998 returned with exit code 1
	I0815 00:39:24.650995 1405068 network_create.go:287] error running [docker network inspect addons-177998]: docker network inspect addons-177998: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-177998 not found
	I0815 00:39:24.651010 1405068 network_create.go:289] output of [docker network inspect addons-177998]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-177998 not found
	
	** /stderr **
	I0815 00:39:24.651106 1405068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:39:24.669163 1405068 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400183be60}
	I0815 00:39:24.669205 1405068 network_create.go:124] attempt to create docker network addons-177998 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0815 00:39:24.669260 1405068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-177998 addons-177998
	I0815 00:39:24.745263 1405068 network_create.go:108] docker network addons-177998 192.168.49.0/24 created
	I0815 00:39:24.745298 1405068 kic.go:121] calculated static IP "192.168.49.2" for the "addons-177998" container
	I0815 00:39:24.745375 1405068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 00:39:24.760833 1405068 cli_runner.go:164] Run: docker volume create addons-177998 --label name.minikube.sigs.k8s.io=addons-177998 --label created_by.minikube.sigs.k8s.io=true
	I0815 00:39:24.779056 1405068 oci.go:103] Successfully created a docker volume addons-177998
	I0815 00:39:24.779159 1405068 cli_runner.go:164] Run: docker run --rm --name addons-177998-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177998 --entrypoint /usr/bin/test -v addons-177998:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 00:39:26.844938 1405068 cli_runner.go:217] Completed: docker run --rm --name addons-177998-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177998 --entrypoint /usr/bin/test -v addons-177998:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (2.065719274s)
	I0815 00:39:26.844972 1405068 oci.go:107] Successfully prepared a docker volume addons-177998
	I0815 00:39:26.844985 1405068 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:39:26.845005 1405068 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 00:39:26.845091 1405068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-177998:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0815 00:39:31.817942 1405068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-177998:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.972796187s)
	I0815 00:39:31.817981 1405068 kic.go:203] duration metric: took 4.972972653s to extract preloaded images to volume ...
	W0815 00:39:31.818121 1405068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 00:39:31.818244 1405068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 00:39:31.870702 1405068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-177998 --name addons-177998 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177998 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-177998 --network addons-177998 --ip 192.168.49.2 --volume addons-177998:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 00:39:32.211825 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Running}}
	I0815 00:39:32.232384 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:39:32.259982 1405068 cli_runner.go:164] Run: docker exec addons-177998 stat /var/lib/dpkg/alternatives/iptables
	I0815 00:39:32.338794 1405068 oci.go:144] the created container "addons-177998" has a running status.
	I0815 00:39:32.338824 1405068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa...
	I0815 00:39:32.486700 1405068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 00:39:32.510882 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:39:32.532865 1405068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 00:39:32.532885 1405068 kic_runner.go:114] Args: [docker exec --privileged addons-177998 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 00:39:32.598012 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:39:32.623008 1405068 machine.go:94] provisionDockerMachine start ...
	I0815 00:39:32.623101 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:32.651978 1405068 main.go:141] libmachine: Using SSH client type: native
	I0815 00:39:32.652232 1405068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34600 <nil> <nil>}
	I0815 00:39:32.652241 1405068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:39:32.652894 1405068 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53966->127.0.0.1:34600: read: connection reset by peer
	I0815 00:39:35.786107 1405068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-177998
	
	I0815 00:39:35.786135 1405068 ubuntu.go:169] provisioning hostname "addons-177998"
	I0815 00:39:35.786209 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:35.803627 1405068 main.go:141] libmachine: Using SSH client type: native
	I0815 00:39:35.803882 1405068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34600 <nil> <nil>}
	I0815 00:39:35.803899 1405068 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-177998 && echo "addons-177998" | sudo tee /etc/hostname
	I0815 00:39:35.951415 1405068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-177998
	
	I0815 00:39:35.951500 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:35.968769 1405068 main.go:141] libmachine: Using SSH client type: native
	I0815 00:39:35.969024 1405068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34600 <nil> <nil>}
	I0815 00:39:35.969046 1405068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-177998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-177998/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-177998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:39:36.114811 1405068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:39:36.114897 1405068 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-1398913/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-1398913/.minikube}
	I0815 00:39:36.114943 1405068 ubuntu.go:177] setting up certificates
	I0815 00:39:36.114979 1405068 provision.go:84] configureAuth start
	I0815 00:39:36.115058 1405068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177998
	I0815 00:39:36.131944 1405068 provision.go:143] copyHostCerts
	I0815 00:39:36.132047 1405068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem (1082 bytes)
	I0815 00:39:36.132179 1405068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem (1123 bytes)
	I0815 00:39:36.132243 1405068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem (1679 bytes)
	I0815 00:39:36.132304 1405068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem org=jenkins.addons-177998 san=[127.0.0.1 192.168.49.2 addons-177998 localhost minikube]
	I0815 00:39:36.507093 1405068 provision.go:177] copyRemoteCerts
	I0815 00:39:36.507168 1405068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:39:36.507210 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:36.523755 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:36.620286 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 00:39:36.645918 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:39:36.671640 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 00:39:36.696376 1405068 provision.go:87] duration metric: took 581.368275ms to configureAuth
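configureAuth above generated a docker-machine server certificate with SANs [127.0.0.1 192.168.49.2 addons-177998 localhost minikube] and copied it to /etc/docker/server.pem on the node. Purely as an illustration (not a step minikube itself runs), the SANs can be checked with openssl from inside the node:

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'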
	I0815 00:39:36.696403 1405068 ubuntu.go:193] setting minikube options for container-runtime
	I0815 00:39:36.696597 1405068 config.go:182] Loaded profile config "addons-177998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:39:36.696719 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:36.713720 1405068 main.go:141] libmachine: Using SSH client type: native
	I0815 00:39:36.713978 1405068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34600 <nil> <nil>}
	I0815 00:39:36.713994 1405068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:39:36.948596 1405068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:39:36.948618 1405068 machine.go:97] duration metric: took 4.325592304s to provisionDockerMachine
	I0815 00:39:36.948628 1405068 client.go:171] duration metric: took 13.33533055s to LocalClient.Create
	I0815 00:39:36.948642 1405068 start.go:167] duration metric: took 13.335384565s to libmachine.API.Create "addons-177998"
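provisionDockerMachine ended by writing CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarting CRI-O (the SSH command a few lines up). A quick manual check that the step took effect, sketched with minikube's ssh subcommand and the profile name from this run, would be:

    minikube -p addons-177998 ssh "cat /etc/sysconfig/crio.minikube"
    minikube -p addons-177998 ssh "sudo systemctl is-active crio"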
	I0815 00:39:36.948649 1405068 start.go:293] postStartSetup for "addons-177998" (driver="docker")
	I0815 00:39:36.948665 1405068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:39:36.948748 1405068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:39:36.948795 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:36.966990 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:37.072349 1405068 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:39:37.075806 1405068 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 00:39:37.075841 1405068 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 00:39:37.075852 1405068 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 00:39:37.075859 1405068 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 00:39:37.075870 1405068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/addons for local assets ...
	I0815 00:39:37.075942 1405068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/files for local assets ...
	I0815 00:39:37.075964 1405068 start.go:296] duration metric: took 127.308685ms for postStartSetup
	I0815 00:39:37.076288 1405068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177998
	I0815 00:39:37.092489 1405068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/config.json ...
	I0815 00:39:37.092798 1405068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:39:37.092842 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:37.109522 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:37.203168 1405068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 00:39:37.207415 1405068 start.go:128] duration metric: took 13.596836018s to createHost
	I0815 00:39:37.207444 1405068 start.go:83] releasing machines lock for "addons-177998", held for 13.597006297s
	I0815 00:39:37.207517 1405068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177998
	I0815 00:39:37.228753 1405068 ssh_runner.go:195] Run: cat /version.json
	I0815 00:39:37.228817 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:37.229057 1405068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:39:37.229117 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:37.253491 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:37.254482 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:37.345931 1405068 ssh_runner.go:195] Run: systemctl --version
	I0815 00:39:37.476469 1405068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:39:37.619185 1405068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 00:39:37.623434 1405068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:39:37.643128 1405068 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 00:39:37.643207 1405068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:39:37.675635 1405068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0815 00:39:37.675711 1405068 start.go:495] detecting cgroup driver to use...
	I0815 00:39:37.675760 1405068 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 00:39:37.675844 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:39:37.691697 1405068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:39:37.703404 1405068 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:39:37.703488 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:39:37.717725 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:39:37.732729 1405068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:39:37.822468 1405068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:39:37.927047 1405068 docker.go:233] disabling docker service ...
	I0815 00:39:37.927141 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:39:37.947551 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:39:37.963430 1405068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:39:38.064078 1405068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:39:38.165031 1405068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:39:38.176454 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:39:38.194244 1405068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:39:38.194356 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.204489 1405068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:39:38.204575 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.214847 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.225006 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.235204 1405068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:39:38.244680 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.254960 1405068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.271170 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
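The sed/grep commands above edit /etc/crio/crio.conf.d/02-crio.conf in place; the values they set follow directly from the commands shown, so a grep over the drop-in should show roughly this (illustrative):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",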
	I0815 00:39:38.281068 1405068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:39:38.289951 1405068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:39:38.298253 1405068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:39:38.383214 1405068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:39:38.501507 1405068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:39:38.501687 1405068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:39:38.505695 1405068 start.go:563] Will wait 60s for crictl version
	I0815 00:39:38.505778 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:39:38.509206 1405068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:39:38.547942 1405068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 00:39:38.548053 1405068 ssh_runner.go:195] Run: crio --version
	I0815 00:39:38.588730 1405068 ssh_runner.go:195] Run: crio --version
	I0815 00:39:38.634077 1405068 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 00:39:38.636156 1405068 cli_runner.go:164] Run: docker network inspect addons-177998 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:39:38.650117 1405068 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 00:39:38.653667 1405068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
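The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway (192.168.49.1). Whether the entry is in place can be verified from inside the node, for example with getent (illustrative):

    getent hosts host.minikube.internal    # expected: 192.168.49.1 host.minikube.internal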
	I0815 00:39:38.664210 1405068 kubeadm.go:883] updating cluster {Name:addons-177998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0815 00:39:38.664337 1405068 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:39:38.664398 1405068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:39:38.746727 1405068 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:39:38.746753 1405068 crio.go:433] Images already preloaded, skipping extraction
	I0815 00:39:38.746810 1405068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:39:38.783967 1405068 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:39:38.783990 1405068 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:39:38.783999 1405068 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0815 00:39:38.784099 1405068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-177998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
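The [Unit]/[Service] drop-in shown above corresponds to the 363-byte 10-kubeadm.conf that is scp'd to /etc/systemd/system/kubelet.service.d/ a little further down. To see the merged unit systemd actually loads after that, one option (illustrative) is:

    systemctl cat kubelet
    systemctl status kubelet --no-pager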
	I0815 00:39:38.784182 1405068 ssh_runner.go:195] Run: crio config
	I0815 00:39:38.834517 1405068 cni.go:84] Creating CNI manager for ""
	I0815 00:39:38.834541 1405068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:39:38.834555 1405068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:39:38.834578 1405068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-177998 NodeName:addons-177998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:39:38.834729 1405068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-177998"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
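The generated config above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 accepts but flags as deprecated in the warnings printed near the end of init. A cheap sanity check against a local copy (sketch; kubeadm.yaml is an assumed local filename, the node path in this run is /var/tmp/minikube/kubeadm.yaml) is:

    kubeadm init --config kubeadm.yaml --dry-run
    # or rewrite it to the current API version, as the deprecation warning suggests:
    kubeadm config migrate --old-config kubeadm.yaml --new-config kubeadm-new.yaml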
	
	I0815 00:39:38.834801 1405068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:39:38.843622 1405068 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:39:38.843688 1405068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:39:38.852416 1405068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 00:39:38.870695 1405068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:39:38.888484 1405068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0815 00:39:38.905935 1405068 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0815 00:39:38.909139 1405068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:39:38.919754 1405068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:39:39.010933 1405068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:39:39.026856 1405068 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998 for IP: 192.168.49.2
	I0815 00:39:39.026882 1405068 certs.go:194] generating shared ca certs ...
	I0815 00:39:39.026903 1405068 certs.go:226] acquiring lock for ca certs: {Name:mk7828e60149aaf109ce40cae2b300a118fa9ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:39.027089 1405068 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key
	I0815 00:39:39.673208 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt ...
	I0815 00:39:39.673239 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt: {Name:mk659c4665d9208d9ef76dc441880ade749b2196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:39.673916 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key ...
	I0815 00:39:39.673942 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key: {Name:mk7c20db56fb05eddb03e1dd8e898401e59f742b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:39.674521 1405068 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key
	I0815 00:39:41.073877 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt ...
	I0815 00:39:41.073914 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt: {Name:mk724f5e477d1fd6ee1f18d46c189d4c01d6ab13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:41.074117 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key ...
	I0815 00:39:41.074130 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key: {Name:mka2c5a0dd947cf79d708cfa9e77fea7155b512c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:41.074648 1405068 certs.go:256] generating profile certs ...
	I0815 00:39:41.074721 1405068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.key
	I0815 00:39:41.074741 1405068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt with IP's: []
	I0815 00:39:41.752228 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt ...
	I0815 00:39:41.752262 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: {Name:mkd09339e85b8a905e0a8958e21d9c814968d75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:41.752463 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.key ...
	I0815 00:39:41.752477 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.key: {Name:mk55bc47ab12616c4255067ff759edcf2329fb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:41.753105 1405068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key.947b2780
	I0815 00:39:41.753133 1405068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt.947b2780 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0815 00:39:42.725554 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt.947b2780 ...
	I0815 00:39:42.725632 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt.947b2780: {Name:mk1436a0e1ea41f2fbf1ac4f9e43de05848aff73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:42.725867 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key.947b2780 ...
	I0815 00:39:42.725916 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key.947b2780: {Name:mkf95c7cc21a34bed24d68ecd28119b08a923a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:42.726681 1405068 certs.go:381] copying /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt.947b2780 -> /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt
	I0815 00:39:42.726851 1405068 certs.go:385] copying /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key.947b2780 -> /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key
	I0815 00:39:42.726976 1405068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.key
	I0815 00:39:42.727020 1405068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.crt with IP's: []
	I0815 00:39:43.649164 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.crt ...
	I0815 00:39:43.649200 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.crt: {Name:mk36d3fb042007f51c19030fb1645bce43faeb2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:43.649398 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.key ...
	I0815 00:39:43.649415 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.key: {Name:mk79bdfa1392256a1100760467793cf2016af1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:43.649602 1405068 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:39:43.649649 1405068 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem (1082 bytes)
	I0815 00:39:43.649680 1405068 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:39:43.649708 1405068 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem (1679 bytes)
	I0815 00:39:43.650307 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:39:43.675240 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:39:43.699115 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:39:43.723872 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 00:39:43.748538 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 00:39:43.773397 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:39:43.798046 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:39:43.823599 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 00:39:43.848242 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:39:43.872172 1405068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:39:43.890080 1405068 ssh_runner.go:195] Run: openssl version
	I0815 00:39:43.895603 1405068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:39:43.905057 1405068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:39:43.908592 1405068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:39:43.908671 1405068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:39:43.915663 1405068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
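The b5213941.0 symlink name is OpenSSL's subject-name hash of the minikubeCA certificate, i.e. the value the x509 -hash query two lines up prints. Reproducing it by hand looks like (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints the hash used for the /etc/ssl/certs/<hash>.0 symlink (b5213941 in this run)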
	I0815 00:39:43.925215 1405068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:39:43.928491 1405068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:39:43.928543 1405068 kubeadm.go:392] StartCluster: {Name:addons-177998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:39:43.928627 1405068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:39:43.928687 1405068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:39:43.965177 1405068 cri.go:89] found id: ""
	I0815 00:39:43.965286 1405068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:39:43.974109 1405068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:39:43.983003 1405068 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 00:39:43.983072 1405068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:39:43.992064 1405068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:39:43.992098 1405068 kubeadm.go:157] found existing configuration files:
	
	I0815 00:39:43.992189 1405068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:39:44.001596 1405068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:39:44.001770 1405068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:39:44.017829 1405068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:39:44.027370 1405068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:39:44.027479 1405068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:39:44.036435 1405068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:39:44.045908 1405068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:39:44.045983 1405068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:39:44.055213 1405068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:39:44.064441 1405068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:39:44.064531 1405068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:39:44.073352 1405068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 00:39:44.114035 1405068 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:39:44.114117 1405068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:39:44.133839 1405068 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 00:39:44.133916 1405068 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0815 00:39:44.133957 1405068 kubeadm.go:310] OS: Linux
	I0815 00:39:44.134004 1405068 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 00:39:44.134055 1405068 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 00:39:44.134104 1405068 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 00:39:44.134154 1405068 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 00:39:44.134203 1405068 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 00:39:44.134258 1405068 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 00:39:44.134305 1405068 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 00:39:44.134355 1405068 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 00:39:44.134415 1405068 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 00:39:44.201757 1405068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:39:44.201878 1405068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:39:44.201974 1405068 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:39:44.210830 1405068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:39:44.215296 1405068 out.go:204]   - Generating certificates and keys ...
	I0815 00:39:44.215397 1405068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:39:44.215470 1405068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:39:44.480998 1405068 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:39:45.003152 1405068 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:39:45.539961 1405068 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:39:45.904140 1405068 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:39:46.259344 1405068 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:39:46.259676 1405068 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-177998 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:39:47.173704 1405068 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:39:47.173850 1405068 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-177998 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:39:47.933105 1405068 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:39:48.414713 1405068 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:39:49.406063 1405068 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:39:49.406294 1405068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:39:49.785304 1405068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:39:50.298827 1405068 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:39:50.785106 1405068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:39:51.037279 1405068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:39:51.650179 1405068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:39:51.650966 1405068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:39:51.654070 1405068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:39:51.656405 1405068 out.go:204]   - Booting up control plane ...
	I0815 00:39:51.656510 1405068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:39:51.656591 1405068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:39:51.657344 1405068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:39:51.672944 1405068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:39:51.679100 1405068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:39:51.679380 1405068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:39:51.779425 1405068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:39:51.779978 1405068 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:39:53.282180 1405068 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502227525s
	I0815 00:39:53.282278 1405068 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:39:59.783576 1405068 kubeadm.go:310] [api-check] The API server is healthy after 6.501405475s
	I0815 00:39:59.804259 1405068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:39:59.819242 1405068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:39:59.843823 1405068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:39:59.844014 1405068 kubeadm.go:310] [mark-control-plane] Marking the node addons-177998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:39:59.854635 1405068 kubeadm.go:310] [bootstrap-token] Using token: py4gee.yjt56wgqozwbhy5y
	I0815 00:39:59.857957 1405068 out.go:204]   - Configuring RBAC rules ...
	I0815 00:39:59.858088 1405068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:39:59.862181 1405068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:39:59.870590 1405068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:39:59.875596 1405068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:39:59.879516 1405068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:39:59.883337 1405068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:40:00.207046 1405068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:40:00.818575 1405068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:40:01.192091 1405068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:40:01.194626 1405068 kubeadm.go:310] 
	I0815 00:40:01.194715 1405068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:40:01.194726 1405068 kubeadm.go:310] 
	I0815 00:40:01.194804 1405068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:40:01.194812 1405068 kubeadm.go:310] 
	I0815 00:40:01.194837 1405068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:40:01.194899 1405068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:40:01.194950 1405068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:40:01.194961 1405068 kubeadm.go:310] 
	I0815 00:40:01.195013 1405068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:40:01.195023 1405068 kubeadm.go:310] 
	I0815 00:40:01.195069 1405068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:40:01.195077 1405068 kubeadm.go:310] 
	I0815 00:40:01.195127 1405068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:40:01.195202 1405068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:40:01.195271 1405068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:40:01.195279 1405068 kubeadm.go:310] 
	I0815 00:40:01.195359 1405068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:40:01.195438 1405068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:40:01.195445 1405068 kubeadm.go:310] 
	I0815 00:40:01.195525 1405068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token py4gee.yjt56wgqozwbhy5y \
	I0815 00:40:01.195628 1405068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e6084f0db819136e4eac5633399139c1200997e817605c079edabc35a775495a \
	I0815 00:40:01.195651 1405068 kubeadm.go:310] 	--control-plane 
	I0815 00:40:01.195658 1405068 kubeadm.go:310] 
	I0815 00:40:01.195740 1405068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:40:01.195748 1405068 kubeadm.go:310] 
	I0815 00:40:01.195827 1405068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token py4gee.yjt56wgqozwbhy5y \
	I0815 00:40:01.195928 1405068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e6084f0db819136e4eac5633399139c1200997e817605c079edabc35a775495a 
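The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's public key. As an illustration only (the standard kubeadm recipe, with the CA path adjusted to where this run keeps it, /var/lib/minikube/certs/ca.crt), it can be recomputed on the node with:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'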
	I0815 00:40:01.201146 1405068 kubeadm.go:310] W0815 00:39:44.110804    1203 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:40:01.201442 1405068 kubeadm.go:310] W0815 00:39:44.111701    1203 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:40:01.201656 1405068 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0815 00:40:01.201769 1405068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 00:40:01.201791 1405068 cni.go:84] Creating CNI manager for ""
	I0815 00:40:01.201805 1405068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:40:01.203954 1405068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 00:40:01.205830 1405068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 00:40:01.210744 1405068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 00:40:01.210766 1405068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 00:40:01.233214 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 00:40:01.528947 1405068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:40:01.529098 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:01.529101 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-177998 minikube.k8s.io/updated_at=2024_08_15T00_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=addons-177998 minikube.k8s.io/primary=true
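Right after init, three commands are fired off together: an oom_adj read on the apiserver process, the minikube-rbac clusterrolebinding, and the minikube.k8s.io/* node labels. Whether the labels stuck can be checked later with kubectl (sketch; context name assumed from this run):

    kubectl --context addons-177998 get node addons-177998 --show-labels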
	I0815 00:40:01.545264 1405068 ops.go:34] apiserver oom_adj: -16
	I0815 00:40:01.644762 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:02.145805 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:02.644876 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:03.144896 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:03.645370 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:04.144807 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:04.645732 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:05.145444 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:05.644836 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:05.738045 1405068 kubeadm.go:1113] duration metric: took 4.209008953s to wait for elevateKubeSystemPrivileges
	I0815 00:40:05.738078 1405068 kubeadm.go:394] duration metric: took 21.809539468s to StartCluster
	I0815 00:40:05.738100 1405068 settings.go:142] acquiring lock: {Name:mk702991e0e1159812b2000a3112e7b24af8d662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:40:05.739032 1405068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 00:40:05.739430 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/kubeconfig: {Name:mkbc924cd270a9bf83bc63fe6d76f87df76fc38b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:40:05.739637 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:40:05.739658 1405068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:40:05.739931 1405068 config.go:182] Loaded profile config "addons-177998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:40:05.739961 1405068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0815 00:40:05.740054 1405068 addons.go:69] Setting yakd=true in profile "addons-177998"
	I0815 00:40:05.740077 1405068 addons.go:234] Setting addon yakd=true in "addons-177998"
	I0815 00:40:05.740104 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.740545 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.741012 1405068 addons.go:69] Setting cloud-spanner=true in profile "addons-177998"
	I0815 00:40:05.741048 1405068 addons.go:234] Setting addon cloud-spanner=true in "addons-177998"
	I0815 00:40:05.741067 1405068 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-177998"
	I0815 00:40:05.741103 1405068 addons.go:69] Setting storage-provisioner=true in profile "addons-177998"
	I0815 00:40:05.741124 1405068 addons.go:234] Setting addon storage-provisioner=true in "addons-177998"
	I0815 00:40:05.741143 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.741163 1405068 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-177998"
	I0815 00:40:05.741217 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.741591 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.741723 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.743237 1405068 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-177998"
	I0815 00:40:05.743306 1405068 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-177998"
	I0815 00:40:05.743336 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.743788 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.744292 1405068 addons.go:69] Setting default-storageclass=true in profile "addons-177998"
	I0815 00:40:05.744331 1405068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-177998"
	I0815 00:40:05.744600 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.754730 1405068 addons.go:69] Setting gcp-auth=true in profile "addons-177998"
	I0815 00:40:05.754798 1405068 mustload.go:65] Loading cluster: addons-177998
	I0815 00:40:05.755022 1405068 config.go:182] Loaded profile config "addons-177998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:40:05.755440 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.762167 1405068 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-177998"
	I0815 00:40:05.762320 1405068 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-177998"
	I0815 00:40:05.763075 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.767612 1405068 addons.go:69] Setting ingress=true in profile "addons-177998"
	I0815 00:40:05.767659 1405068 addons.go:234] Setting addon ingress=true in "addons-177998"
	I0815 00:40:05.767703 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.768150 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.779925 1405068 addons.go:69] Setting volcano=true in profile "addons-177998"
	I0815 00:40:05.779973 1405068 addons.go:234] Setting addon volcano=true in "addons-177998"
	I0815 00:40:05.780009 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.780463 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.798768 1405068 addons.go:69] Setting ingress-dns=true in profile "addons-177998"
	I0815 00:40:05.798861 1405068 addons.go:234] Setting addon ingress-dns=true in "addons-177998"
	I0815 00:40:05.798919 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.799391 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.812397 1405068 addons.go:69] Setting inspektor-gadget=true in profile "addons-177998"
	I0815 00:40:05.812523 1405068 addons.go:69] Setting volumesnapshots=true in profile "addons-177998"
	I0815 00:40:05.812589 1405068 addons.go:234] Setting addon volumesnapshots=true in "addons-177998"
	I0815 00:40:05.812672 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.812718 1405068 addons.go:234] Setting addon inspektor-gadget=true in "addons-177998"
	I0815 00:40:05.812849 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.813251 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.822810 1405068 out.go:177] * Verifying Kubernetes components...
	I0815 00:40:05.825525 1405068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:40:05.837404 1405068 addons.go:69] Setting metrics-server=true in profile "addons-177998"
	I0815 00:40:05.837505 1405068 addons.go:234] Setting addon metrics-server=true in "addons-177998"
	I0815 00:40:05.837579 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.838205 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.841808 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.741079 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.848147 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.741096 1405068 addons.go:69] Setting registry=true in profile "addons-177998"
	I0815 00:40:05.849638 1405068 addons.go:234] Setting addon registry=true in "addons-177998"
	I0815 00:40:05.849692 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.850197 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.862711 1405068 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 00:40:05.862940 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.867350 1405068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:40:05.869369 1405068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:40:05.869387 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:40:05.869447 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:05.864873 1405068 addons.go:234] Setting addon default-storageclass=true in "addons-177998"
	I0815 00:40:05.870607 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.871043 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.897478 1405068 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:40:05.897502 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 00:40:05.897570 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:05.899011 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0815 00:40:05.900463 1405068 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0815 00:40:05.919468 1405068 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 00:40:05.922193 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 00:40:05.924099 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 00:40:05.925950 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 00:40:05.928114 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 00:40:05.928161 1405068 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 00:40:05.928240 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:05.929012 1405068 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-177998"
	I0815 00:40:05.929046 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.929445 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.928130 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 00:40:05.954529 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 00:40:05.959494 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 00:40:05.960705 1405068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:40:05.970996 1405068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:40:05.989836 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 00:40:06.000891 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 00:40:06.004583 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 00:40:06.004666 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 00:40:06.004786 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.005019 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 00:40:06.005060 1405068 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 00:40:06.005145 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.036154 1405068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 00:40:06.036323 1405068 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 00:40:06.042017 1405068 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 00:40:06.042088 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 00:40:06.042196 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.042587 1405068 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:40:06.042630 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 00:40:06.042713 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.100589 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.100613 1405068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:40:06.100629 1405068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:40:06.100700 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.106709 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.108176 1405068 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 00:40:06.108619 1405068 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 00:40:06.109970 1405068 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 00:40:06.110923 1405068 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 00:40:06.113080 1405068 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:40:06.113100 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 00:40:06.113172 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.117677 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 00:40:06.117708 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 00:40:06.117786 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.120681 1405068 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 00:40:06.120709 1405068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 00:40:06.120779 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.150522 1405068 out.go:177]   - Using image docker.io/busybox:stable
	I0815 00:40:06.150876 1405068 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 00:40:06.151451 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.160032 1405068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:40:06.160054 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 00:40:06.160120 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.162110 1405068 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 00:40:06.166623 1405068 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 00:40:06.166661 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 00:40:06.166745 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.205416 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.206332 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.224063 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.226433 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.231789 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.243709 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.246369 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.290507 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.292829 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.306610 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.462219 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:40:06.467044 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 00:40:06.467144 1405068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:40:06.493291 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 00:40:06.493316 1405068 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 00:40:06.551137 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:40:06.609221 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:40:06.680236 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:40:06.682509 1405068 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 00:40:06.682533 1405068 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 00:40:06.686177 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 00:40:06.700144 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 00:40:06.700170 1405068 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 00:40:06.703583 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 00:40:06.703604 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 00:40:06.709199 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 00:40:06.709224 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 00:40:06.727470 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:40:06.745916 1405068 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 00:40:06.745949 1405068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 00:40:06.747746 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:40:06.757669 1405068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 00:40:06.757697 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 00:40:06.873990 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 00:40:06.874016 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 00:40:06.890242 1405068 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:40:06.890266 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 00:40:06.904622 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 00:40:06.904648 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 00:40:06.908034 1405068 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 00:40:06.908063 1405068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 00:40:06.933834 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 00:40:06.933860 1405068 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 00:40:07.005712 1405068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 00:40:07.005743 1405068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 00:40:07.047039 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 00:40:07.047071 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 00:40:07.074256 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 00:40:07.074281 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 00:40:07.096973 1405068 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 00:40:07.096999 1405068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 00:40:07.186125 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:40:07.186148 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 00:40:07.203350 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 00:40:07.203388 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 00:40:07.206558 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:40:07.208132 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 00:40:07.208154 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 00:40:07.225691 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 00:40:07.225717 1405068 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 00:40:07.265874 1405068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:40:07.265905 1405068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 00:40:07.353238 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:40:07.360318 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 00:40:07.360348 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 00:40:07.363865 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 00:40:07.363891 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 00:40:07.407212 1405068 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:40:07.407233 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 00:40:07.474144 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:40:07.499546 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 00:40:07.499572 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 00:40:07.510485 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 00:40:07.510516 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 00:40:07.614174 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:40:07.618934 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:40:07.618959 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 00:40:07.664656 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 00:40:07.664690 1405068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 00:40:07.730817 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:40:07.797601 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 00:40:07.797627 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 00:40:07.892037 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 00:40:07.892063 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 00:40:07.923596 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:40:07.923621 1405068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 00:40:07.985836 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:40:11.372273 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.910013728s)
	I0815 00:40:11.372399 1405068 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.905236443s)
	I0815 00:40:11.372600 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.821309372s)
	I0815 00:40:11.372415 1405068 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.90534493s)
	I0815 00:40:11.372705 1405068 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0815 00:40:11.374348 1405068 node_ready.go:35] waiting up to 6m0s for node "addons-177998" to be "Ready" ...
	I0815 00:40:12.002430 1405068 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-177998" context rescaled to 1 replicas
	I0815 00:40:13.033882 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.353620146s)
	I0815 00:40:13.034024 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.424526875s)
	I0815 00:40:13.034061 1405068 addons.go:475] Verifying addon ingress=true in "addons-177998"
	I0815 00:40:13.034455 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.348251144s)
	I0815 00:40:13.034529 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.307035746s)
	I0815 00:40:13.034561 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.286796308s)
	I0815 00:40:13.034585 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.828004413s)
	I0815 00:40:13.035025 1405068 addons.go:475] Verifying addon registry=true in "addons-177998"
	I0815 00:40:13.034621 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.681356176s)
	I0815 00:40:13.034671 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.560496631s)
	I0815 00:40:13.036027 1405068 addons.go:475] Verifying addon metrics-server=true in "addons-177998"
	I0815 00:40:13.036361 1405068 out.go:177] * Verifying ingress addon...
	I0815 00:40:13.037837 1405068 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-177998 service yakd-dashboard -n yakd-dashboard
	
	I0815 00:40:13.037890 1405068 out.go:177] * Verifying registry addon...
	I0815 00:40:13.038886 1405068 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 00:40:13.041323 1405068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 00:40:13.081035 1405068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:40:13.081198 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:13.082596 1405068 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 00:40:13.082623 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0815 00:40:13.150967 1405068 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0815 00:40:13.274853 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.6606272s)
	W0815 00:40:13.274903 1405068 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:40:13.274948 1405068 retry.go:31] will retry after 233.192568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:40:13.275031 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.544176958s)
	I0815 00:40:13.381672 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:13.508435 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:40:13.552388 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:13.558992 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.573107025s)
	I0815 00:40:13.559091 1405068 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-177998"
	I0815 00:40:13.562316 1405068 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 00:40:13.565689 1405068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 00:40:13.572079 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:13.661179 1405068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:40:13.661263 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:14.045418 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:14.046748 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:14.069428 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:14.544063 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:14.545746 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:14.569743 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:15.047789 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:15.049775 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:15.070343 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:15.552346 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:15.553129 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:15.569913 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:15.878757 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:16.051259 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:16.052960 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:16.072225 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:16.339970 1405068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 00:40:16.340085 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:16.378599 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:16.448178 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.939650521s)
	I0815 00:40:16.547195 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:16.549116 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:16.573298 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:16.583180 1405068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 00:40:16.601978 1405068 addons.go:234] Setting addon gcp-auth=true in "addons-177998"
	I0815 00:40:16.602079 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:16.602628 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:16.624748 1405068 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 00:40:16.624800 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:16.644541 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:16.748889 1405068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:40:16.751205 1405068 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 00:40:16.752782 1405068 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 00:40:16.752802 1405068 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 00:40:16.782425 1405068 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 00:40:16.782454 1405068 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 00:40:16.808091 1405068 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:40:16.808123 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 00:40:16.833219 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:40:17.049220 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:17.051538 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:17.071604 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:17.572142 1405068 addons.go:475] Verifying addon gcp-auth=true in "addons-177998"
	I0815 00:40:17.574681 1405068 out.go:177] * Verifying gcp-auth addon...
	I0815 00:40:17.577473 1405068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 00:40:17.588309 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:17.592067 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:17.598824 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:17.688359 1405068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 00:40:17.688385 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:18.052798 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:18.053500 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:18.071466 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:18.082695 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:18.387551 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:18.545072 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:18.547144 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:18.569650 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:18.581221 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:19.044712 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:19.047313 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:19.070519 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:19.081081 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:19.543613 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:19.545132 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:19.570215 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:19.582926 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:20.045732 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:20.046902 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:20.070478 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:20.083496 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:20.548972 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:20.550352 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:20.570426 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:20.581500 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:20.878907 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:21.044834 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:21.051405 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:21.073649 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:21.081516 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:21.542864 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:21.545503 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:21.569610 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:21.581197 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:22.043694 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:22.045777 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:22.070804 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:22.081421 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:22.544359 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:22.544950 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:22.569297 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:22.581303 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:23.043436 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:23.046336 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:23.069652 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:23.081428 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:23.379841 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:23.544047 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:23.545361 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:23.569388 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:23.580821 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:24.043183 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:24.046309 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:24.069901 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:24.081465 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:24.543494 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:24.546060 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:24.568941 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:24.582493 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:25.043269 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:25.045681 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:25.072050 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:25.080831 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:25.543293 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:25.546020 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:25.569023 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:25.581140 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:25.878064 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:26.045549 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:26.047565 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:26.069509 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:26.080801 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:26.544198 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:26.545070 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:26.569507 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:26.580544 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:27.044718 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:27.047451 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:27.069865 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:27.081244 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:27.545409 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:27.546501 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:27.569706 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:27.581338 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:27.878142 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:28.043909 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:28.045729 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:28.069867 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:28.081016 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:28.543108 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:28.544692 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:28.569258 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:28.580838 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:29.044895 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:29.047817 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:29.069577 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:29.081008 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:29.543836 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:29.545675 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:29.568991 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:29.581239 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:29.878281 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:30.045770 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:30.053561 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:30.075251 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:30.086241 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:30.542828 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:30.544621 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:30.570082 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:30.580561 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:31.043943 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:31.048045 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:31.071410 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:31.081038 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:31.543657 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:31.545011 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:31.569995 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:31.581445 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:31.878580 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:32.046192 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:32.046525 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:32.069891 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:32.080935 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:32.543375 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:32.546426 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:32.570152 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:32.581416 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:33.043218 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:33.046675 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:33.070278 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:33.081100 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:33.543387 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:33.545205 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:33.569233 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:33.581338 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:34.044526 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:34.045807 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:34.069878 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:34.081252 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:34.377559 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:34.543959 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:34.545570 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:34.569538 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:34.581085 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:35.043859 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:35.046427 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:35.069976 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:35.083020 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:35.543627 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:35.544946 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:35.571071 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:35.580635 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:36.045509 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:36.050566 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:36.070686 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:36.083491 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:36.378888 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:36.547052 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:36.550214 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:36.569914 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:36.581113 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:37.043670 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:37.060186 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:37.081817 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:37.088688 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:37.545248 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:37.546210 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:37.569573 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:37.580477 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:38.043554 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:38.045295 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:38.069923 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:38.082228 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:38.544211 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:38.547123 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:38.569422 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:38.580464 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:38.877850 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:39.044637 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:39.046997 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:39.069516 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:39.081441 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:39.544144 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:39.545668 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:39.569322 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:39.581434 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:40.046113 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:40.048072 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:40.069718 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:40.081093 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:40.543578 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:40.545238 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:40.569597 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:40.581175 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:40.878132 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:41.043945 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:41.046288 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:41.069569 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:41.081841 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:41.542667 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:41.545418 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:41.569591 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:41.581334 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:42.044204 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:42.046635 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:42.069456 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:42.082648 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:42.543343 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:42.546144 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:42.569203 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:42.581554 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:43.043860 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:43.045704 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:43.069239 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:43.080648 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:43.378046 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:43.543057 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:43.545571 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:43.569592 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:43.581110 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:44.043241 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:44.045935 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:44.069899 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:44.080977 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:44.543295 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:44.545448 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:44.569521 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:44.580764 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:45.054935 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:45.071278 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:45.090111 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:45.131394 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:45.383017 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:45.543173 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:45.544868 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:45.569744 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:45.581239 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:46.044056 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:46.045662 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:46.069127 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:46.080642 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:46.544175 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:46.545649 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:46.569785 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:46.580842 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:47.043434 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:47.046933 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:47.070084 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:47.080803 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:47.544502 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:47.546142 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:47.569377 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:47.581160 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:47.877977 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:48.044349 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:48.046541 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:48.069742 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:48.081689 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:48.543512 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:48.545005 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:48.569290 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:48.580816 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:49.043433 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:49.045535 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:49.070293 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:49.080998 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:49.543511 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:49.546720 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:49.569654 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:49.581330 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:49.878621 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:50.046504 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:50.047179 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:50.069531 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:50.084077 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:50.545774 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:50.546801 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:50.570061 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:50.580587 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:51.054186 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:51.055558 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:51.070015 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:51.081853 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:51.542910 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:51.544481 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:51.569830 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:51.580919 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:52.044111 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:52.044984 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:52.068999 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:52.081324 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:52.378269 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:52.543519 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:52.545708 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:52.568932 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:52.581233 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:53.046647 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:53.047575 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:53.069833 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:53.080955 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:53.555318 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:53.556974 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:53.618618 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:53.621001 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:53.910808 1405068 node_ready.go:49] node "addons-177998" has status "Ready":"True"
	I0815 00:40:53.910885 1405068 node_ready.go:38] duration metric: took 42.536361271s for node "addons-177998" to be "Ready" ...
	I0815 00:40:53.910910 1405068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:40:53.960183 1405068 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-pdg4h" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:54.055985 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:54.061760 1405068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:40:54.061833 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:54.086226 1405068 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:40:54.086309 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:54.098231 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:54.572156 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:54.573210 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:54.600568 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:54.600986 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:55.065030 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:55.066624 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:55.081546 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:55.083986 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:55.545514 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:55.546069 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:55.570779 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:55.581472 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:55.967821 1405068 pod_ready.go:92] pod "coredns-6f6b679f8f-pdg4h" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.967893 1405068 pod_ready.go:81] duration metric: took 2.007632437s for pod "coredns-6f6b679f8f-pdg4h" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.967927 1405068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.974166 1405068 pod_ready.go:92] pod "etcd-addons-177998" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.974189 1405068 pod_ready.go:81] duration metric: took 6.254821ms for pod "etcd-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.974206 1405068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.979850 1405068 pod_ready.go:92] pod "kube-apiserver-addons-177998" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.979948 1405068 pod_ready.go:81] duration metric: took 5.732979ms for pod "kube-apiserver-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.979997 1405068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.986844 1405068 pod_ready.go:92] pod "kube-controller-manager-addons-177998" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.986876 1405068 pod_ready.go:81] duration metric: took 6.862177ms for pod "kube-controller-manager-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.986897 1405068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wktb" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.992881 1405068 pod_ready.go:92] pod "kube-proxy-5wktb" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.992912 1405068 pod_ready.go:81] duration metric: took 6.004075ms for pod "kube-proxy-5wktb" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.992924 1405068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:56.043890 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:56.046892 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:56.071073 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:56.082447 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:56.364413 1405068 pod_ready.go:92] pod "kube-scheduler-addons-177998" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:56.364443 1405068 pod_ready.go:81] duration metric: took 371.510456ms for pod "kube-scheduler-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:56.364455 1405068 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:56.545857 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:56.545993 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:56.570094 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:56.580978 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:57.048530 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:57.050161 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:57.071675 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:57.081778 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:57.544860 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:57.547070 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:57.571456 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:57.581162 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:58.044840 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:58.051478 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:58.071099 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:58.081233 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:58.372217 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:40:58.544587 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:58.548136 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:58.572376 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:58.581046 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:59.043624 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:59.047132 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:59.070520 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:59.080831 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:59.544241 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:59.547164 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:59.571007 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:59.581046 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:00.129793 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:00.130775 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:00.132247 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:00.179560 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:00.389903 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:00.550370 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:00.552050 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:00.572716 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:00.592726 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:01.045539 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:01.049681 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:01.071583 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:01.081507 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:01.545596 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:01.546709 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:01.570905 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:01.581314 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:02.045362 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:02.046892 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:02.070984 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:02.081233 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:02.543894 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:02.546851 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:02.570825 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:02.580752 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:02.872205 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:03.068236 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:03.074604 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:03.085648 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:03.088368 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:03.546110 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:03.548758 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:03.581641 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:03.590718 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:04.044897 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:04.045890 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:04.070651 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:04.080990 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:04.553014 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:04.553517 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:04.607367 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:04.607475 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:05.048816 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:05.049043 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:05.087368 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:05.094310 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:05.373032 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:05.554829 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:05.556963 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:05.572651 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:05.586494 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:06.044553 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:06.046812 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:06.071814 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:06.084242 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:06.546345 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:06.549138 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:06.645407 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:06.647536 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:07.052140 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:07.055056 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:07.072309 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:07.082831 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:07.550759 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:07.556443 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:07.574048 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:07.582532 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:07.874664 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:08.045530 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:08.064058 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:08.081583 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:08.095166 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:08.558235 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:08.564800 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:08.581983 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:08.605671 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:09.047368 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:09.056231 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:09.085065 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:09.104080 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:09.548267 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:09.549882 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:09.572315 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:09.583788 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:10.051590 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:10.053859 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:10.072390 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:10.086131 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:10.374489 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:10.546451 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:10.547854 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:10.572053 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:10.582039 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:11.055508 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:11.056530 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:11.071459 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:11.080896 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:11.545096 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:11.546317 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:11.571931 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:11.581631 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:12.045873 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:12.048647 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:12.072588 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:12.084872 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:12.545717 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:12.548896 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:12.575417 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:12.581507 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:12.872662 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:13.048430 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:13.051269 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:13.072255 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:13.082456 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:13.547792 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:13.549633 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:13.572078 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:13.582335 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:14.047103 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:14.047855 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:14.070817 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:14.080537 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:14.543801 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:14.547991 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:14.573258 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:14.582538 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:14.874271 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:15.071762 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:15.075272 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:15.078799 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:15.081997 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:15.547505 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:15.548445 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:15.574291 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:15.584246 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:16.056218 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:16.057799 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:16.157319 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:16.157434 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:16.549609 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:16.550969 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:16.644955 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:16.646861 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:17.047375 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:17.049563 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:17.146994 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:17.147779 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:17.370495 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:17.543493 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:17.546163 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:17.571319 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:17.581680 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:18.046474 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:18.048426 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:18.071192 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:18.081785 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:18.545421 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:18.547090 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:18.570842 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:18.587015 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:19.044515 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:19.048343 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:19.071393 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:19.080556 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:19.371211 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:19.543813 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:19.545047 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:19.571006 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:19.580524 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:20.045335 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:20.046597 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:20.070642 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:20.080981 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:20.543560 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:20.546734 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:20.570330 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:20.582884 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:21.047705 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:21.058073 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:21.071313 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:21.081537 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:21.378616 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:21.546083 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:21.546844 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:21.577534 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:21.583870 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:22.046253 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:22.048272 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:22.072438 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:22.082968 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:22.546643 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:22.547662 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:22.570591 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:22.581058 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:23.046500 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:23.049264 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:23.071292 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:23.081164 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:23.544433 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:23.547021 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:23.571769 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:23.580995 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:23.873207 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:24.050328 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:24.054004 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:24.070858 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:24.085226 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:24.544557 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:24.546975 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:24.571630 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:24.581409 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:25.047809 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:25.049328 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:25.071534 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:25.082063 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:25.545466 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:25.549181 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:25.571973 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:25.581885 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:26.046955 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:26.048545 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:26.081651 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:26.098023 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:26.373130 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:26.543918 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:26.550407 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:26.572203 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:26.581703 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:27.044577 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:27.048015 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:27.072693 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:27.082048 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:27.545872 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:27.548137 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:27.571003 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:27.580825 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:28.045729 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:28.047228 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:28.070734 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:28.080772 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:28.545657 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:28.545833 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:28.570294 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:28.585018 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:28.871574 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:29.047029 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:29.051320 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:29.071659 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:29.081455 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:29.545470 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:29.548029 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:29.571697 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:29.581346 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:30.138301 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:30.141147 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:30.143034 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:30.144753 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:30.544859 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:30.546485 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:30.571351 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:30.581791 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:30.891417 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:31.046838 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:31.047693 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:31.070266 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:31.081087 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:31.544594 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:31.548174 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:31.573073 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:31.581603 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:32.043917 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:32.046666 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:32.070988 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:32.081254 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:32.543761 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:32.545855 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:32.570223 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:32.581465 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:33.047807 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:33.048863 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:33.070593 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:33.081449 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:33.372169 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:33.544481 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:33.547794 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:33.571322 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:33.581629 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:34.044653 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:34.052029 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:34.071300 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:34.081865 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:34.544609 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:34.551649 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:34.571161 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:34.581159 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:35.044582 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:35.047363 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:35.071573 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:35.081515 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:35.372639 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:35.553260 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:35.555709 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:35.575133 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:35.603154 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:36.063634 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:36.066057 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:36.076923 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:36.088907 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:36.544378 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:36.547174 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:36.571310 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:36.581520 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:37.052852 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:37.146297 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:37.146886 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:37.147492 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:37.544776 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:37.556076 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:37.570813 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:37.581248 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:37.896114 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:38.048502 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:38.056100 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:38.073952 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:38.091218 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:38.553604 1405068 kapi.go:107] duration metric: took 1m25.512277354s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 00:41:38.554878 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:38.582709 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:38.584069 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:39.049990 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:39.072005 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:39.083372 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:39.545371 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:39.571560 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:39.581337 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:40.050425 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:40.073042 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:40.083996 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:40.372053 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:40.544067 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:40.571510 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:40.581248 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:41.044468 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:41.071933 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:41.081234 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:41.544559 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:41.570753 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:41.581551 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:42.054210 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:42.082779 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:42.107381 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:42.544328 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:42.571312 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:42.581312 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:42.871263 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:43.047750 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:43.072165 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:43.081883 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:43.545624 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:43.574064 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:43.581965 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:44.044330 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:44.074072 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:44.082444 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:44.546675 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:44.576639 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:44.581310 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:45.045212 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:45.071658 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:45.081801 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:45.378691 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:45.543669 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:45.571379 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:45.581020 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:46.043501 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:46.070655 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:46.081151 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:46.544372 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:46.571583 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:46.581372 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:47.047785 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:47.076651 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:47.087650 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:47.544019 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:47.574289 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:47.584440 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:47.871714 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:48.045120 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:48.071995 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:48.081798 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:48.544469 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:48.571801 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:48.581828 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:49.043960 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:49.145285 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:49.146030 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:49.548079 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:49.576482 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:49.583273 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:49.874059 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:50.047471 1405068 kapi.go:107] duration metric: took 1m37.008578884s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 00:41:50.077966 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:50.084418 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:50.572742 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:50.581962 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:51.079181 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:51.171044 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:51.571046 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:51.581620 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:52.071496 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:52.081980 1405068 kapi.go:107] duration metric: took 1m34.504502919s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 00:41:52.084211 1405068 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-177998 cluster.
	I0815 00:41:52.086281 1405068 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 00:41:52.088272 1405068 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
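[editor's note: the gcp-auth messages above refer to a `gcp-auth-skip-secret` label; as a hedged illustration only (not part of the captured run), a pod carrying that label should not get the credentials mounted. The pod name and image below are hypothetical.]

    # Hypothetical example: create a pod labeled so the gcp-auth webhook skips it.
    # The label key comes from the log message above; name/image are illustrative.
    kubectl --context addons-177998 run no-gcp-creds \
      --image=nginx \
      --labels=gcp-auth-skip-secret=true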
	I0815 00:41:52.370279 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:52.570607 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:53.072080 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:53.575721 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:54.071621 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:54.371219 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:54.570124 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:55.071984 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:55.571010 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:56.070870 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:56.372648 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:56.571169 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:57.072562 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:57.570736 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:58.070927 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:58.571612 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:58.872397 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:59.071587 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:59.585776 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:00.085897 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:00.571922 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:01.072135 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:01.372871 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:01.572663 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:02.072025 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:02.570771 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:03.071823 1405068 kapi.go:107] duration metric: took 1m49.506131956s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 00:42:03.074203 1405068 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0815 00:42:03.076173 1405068 addons.go:510] duration metric: took 1m57.33619243s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
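[editor's note: with all addons reported enabled, a quick manual way to confirm the addon states for this profile (illustrative only, not captured in this log) would be:]

    # Illustrative check of addon status for the same minikube profile.
    minikube addons list -p addons-177998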
	I0815 00:42:03.871053 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:06.370529 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:08.370631 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:10.370947 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:12.870963 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:14.370512 1405068 pod_ready.go:92] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"True"
	I0815 00:42:14.370540 1405068 pod_ready.go:81] duration metric: took 1m18.006077671s for pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace to be "Ready" ...
	I0815 00:42:14.370552 1405068 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7b7wb" in "kube-system" namespace to be "Ready" ...
	I0815 00:42:14.375899 1405068 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-7b7wb" in "kube-system" namespace has status "Ready":"True"
	I0815 00:42:14.375929 1405068 pod_ready.go:81] duration metric: took 5.369306ms for pod "nvidia-device-plugin-daemonset-7b7wb" in "kube-system" namespace to be "Ready" ...
	I0815 00:42:14.375950 1405068 pod_ready.go:38] duration metric: took 1m20.464996136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
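[editor's note: the readiness polling above waits on pods selected by the listed component labels; a roughly equivalent manual check is sketched below. The selector and the 6m timeout are taken from the log lines above; the command itself is illustrative and was not run as part of this report.]

    # Roughly equivalent manual wait on one of the logged selectors (k8s-app=kube-dns);
    # the 6m timeout mirrors the "waiting up to 6m0s" entries in the log.
    kubectl --context addons-177998 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m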
	I0815 00:42:14.375967 1405068 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:42:14.375995 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:42:14.376058 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:42:14.429020 1405068 cri.go:89] found id: "dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:14.429045 1405068 cri.go:89] found id: ""
	I0815 00:42:14.429053 1405068 logs.go:276] 1 containers: [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da]
	I0815 00:42:14.429121 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.433118 1405068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:42:14.433195 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:42:14.475366 1405068 cri.go:89] found id: "500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:14.475431 1405068 cri.go:89] found id: ""
	I0815 00:42:14.475446 1405068 logs.go:276] 1 containers: [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5]
	I0815 00:42:14.475503 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.479185 1405068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:42:14.479258 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:42:14.517625 1405068 cri.go:89] found id: "a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:14.517695 1405068 cri.go:89] found id: ""
	I0815 00:42:14.517718 1405068 logs.go:276] 1 containers: [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e]
	I0815 00:42:14.517809 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.521568 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:42:14.521684 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:42:14.561522 1405068 cri.go:89] found id: "b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:14.561588 1405068 cri.go:89] found id: ""
	I0815 00:42:14.561607 1405068 logs.go:276] 1 containers: [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1]
	I0815 00:42:14.561699 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.565517 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:42:14.565636 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:42:14.604808 1405068 cri.go:89] found id: "1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:14.604828 1405068 cri.go:89] found id: ""
	I0815 00:42:14.604836 1405068 logs.go:276] 1 containers: [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215]
	I0815 00:42:14.604901 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.608375 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:42:14.608452 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:42:14.653579 1405068 cri.go:89] found id: "a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:14.653657 1405068 cri.go:89] found id: ""
	I0815 00:42:14.653680 1405068 logs.go:276] 1 containers: [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55]
	I0815 00:42:14.653764 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.657325 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:42:14.657405 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:42:14.702609 1405068 cri.go:89] found id: "5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:14.702672 1405068 cri.go:89] found id: ""
	I0815 00:42:14.702695 1405068 logs.go:276] 1 containers: [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c]
	I0815 00:42:14.702787 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.706509 1405068 logs.go:123] Gathering logs for kubelet ...
	I0815 00:42:14.706572 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 00:42:14.759733 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.597954    1525 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-177998' and this object
	W0815 00:42:14.759979 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:14.760167 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:14.760397 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:14.760583 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:14.760811 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:14.799389 1405068 logs.go:123] Gathering logs for dmesg ...
	I0815 00:42:14.799419 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:42:14.816992 1405068 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:42:14.817020 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:42:15.068107 1405068 logs.go:123] Gathering logs for kube-apiserver [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da] ...
	I0815 00:42:15.068148 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:15.136937 1405068 logs.go:123] Gathering logs for kube-proxy [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215] ...
	I0815 00:42:15.136982 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:15.180626 1405068 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:42:15.180656 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:42:15.273850 1405068 logs.go:123] Gathering logs for etcd [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5] ...
	I0815 00:42:15.273887 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:15.342202 1405068 logs.go:123] Gathering logs for coredns [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e] ...
	I0815 00:42:15.342236 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:15.386086 1405068 logs.go:123] Gathering logs for kube-scheduler [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1] ...
	I0815 00:42:15.386120 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:15.443093 1405068 logs.go:123] Gathering logs for kube-controller-manager [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55] ...
	I0815 00:42:15.443129 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:15.512302 1405068 logs.go:123] Gathering logs for kindnet [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c] ...
	I0815 00:42:15.512337 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:15.562318 1405068 logs.go:123] Gathering logs for container status ...
	I0815 00:42:15.562352 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:42:15.614448 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:15.614517 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 00:42:15.614595 1405068 out.go:239] X Problems detected in kubelet:
	W0815 00:42:15.614634 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:15.614667 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:15.614701 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:15.614736 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:15.614766 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:15.614774 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:15.614781 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:42:25.616126 1405068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:42:25.629976 1405068 api_server.go:72] duration metric: took 2m19.890286111s to wait for apiserver process to appear ...
	I0815 00:42:25.630002 1405068 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:42:25.630039 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:42:25.630100 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:42:25.668851 1405068 cri.go:89] found id: "dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:25.668871 1405068 cri.go:89] found id: ""
	I0815 00:42:25.668882 1405068 logs.go:276] 1 containers: [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da]
	I0815 00:42:25.668938 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.672472 1405068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:42:25.672546 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:42:25.709902 1405068 cri.go:89] found id: "500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:25.709926 1405068 cri.go:89] found id: ""
	I0815 00:42:25.709934 1405068 logs.go:276] 1 containers: [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5]
	I0815 00:42:25.709993 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.713430 1405068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:42:25.713502 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:42:25.759482 1405068 cri.go:89] found id: "a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:25.759504 1405068 cri.go:89] found id: ""
	I0815 00:42:25.759522 1405068 logs.go:276] 1 containers: [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e]
	I0815 00:42:25.759585 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.763155 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:42:25.763229 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:42:25.806127 1405068 cri.go:89] found id: "b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:25.806149 1405068 cri.go:89] found id: ""
	I0815 00:42:25.806157 1405068 logs.go:276] 1 containers: [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1]
	I0815 00:42:25.806211 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.812165 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:42:25.812237 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:42:25.851063 1405068 cri.go:89] found id: "1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:25.851085 1405068 cri.go:89] found id: ""
	I0815 00:42:25.851093 1405068 logs.go:276] 1 containers: [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215]
	I0815 00:42:25.851171 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.854823 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:42:25.854910 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:42:25.899528 1405068 cri.go:89] found id: "a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:25.899547 1405068 cri.go:89] found id: ""
	I0815 00:42:25.899555 1405068 logs.go:276] 1 containers: [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55]
	I0815 00:42:25.899618 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.903072 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:42:25.903145 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:42:25.941256 1405068 cri.go:89] found id: "5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:25.941278 1405068 cri.go:89] found id: ""
	I0815 00:42:25.941286 1405068 logs.go:276] 1 containers: [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c]
	I0815 00:42:25.941343 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.944645 1405068 logs.go:123] Gathering logs for coredns [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e] ...
	I0815 00:42:25.944671 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:25.988222 1405068 logs.go:123] Gathering logs for kube-proxy [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215] ...
	I0815 00:42:25.988250 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:26.040158 1405068 logs.go:123] Gathering logs for kindnet [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c] ...
	I0815 00:42:26.040186 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:26.098079 1405068 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:42:26.098113 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:42:26.197079 1405068 logs.go:123] Gathering logs for container status ...
	I0815 00:42:26.197116 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:42:26.245767 1405068 logs.go:123] Gathering logs for dmesg ...
	I0815 00:42:26.245795 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:42:26.262543 1405068 logs.go:123] Gathering logs for etcd [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5] ...
	I0815 00:42:26.262571 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:26.325374 1405068 logs.go:123] Gathering logs for kube-apiserver [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da] ...
	I0815 00:42:26.325409 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:26.404734 1405068 logs.go:123] Gathering logs for kube-scheduler [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1] ...
	I0815 00:42:26.404767 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:26.456419 1405068 logs.go:123] Gathering logs for kube-controller-manager [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55] ...
	I0815 00:42:26.456453 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:26.531299 1405068 logs.go:123] Gathering logs for kubelet ...
	I0815 00:42:26.531334 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 00:42:26.586269 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.597954    1525 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.586549 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:26.586741 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.586971 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:26.587160 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.587390 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:26.627402 1405068 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:42:26.627434 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:42:26.775160 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:26.775188 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 00:42:26.775263 1405068 out.go:239] X Problems detected in kubelet:
	W0815 00:42:26.775276 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:26.775287 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.775308 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:26.775315 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.775322 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:26.775335 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:26.775341 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:42:36.775911 1405068 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 00:42:36.783827 1405068 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 00:42:36.784954 1405068 api_server.go:141] control plane version: v1.31.0
	I0815 00:42:36.784989 1405068 api_server.go:131] duration metric: took 11.154979952s to wait for apiserver health ...
	I0815 00:42:36.784999 1405068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:42:36.785022 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:42:36.785105 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:42:36.828315 1405068 cri.go:89] found id: "dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:36.828340 1405068 cri.go:89] found id: ""
	I0815 00:42:36.828350 1405068 logs.go:276] 1 containers: [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da]
	I0815 00:42:36.828406 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:36.832906 1405068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:42:36.832986 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:42:36.875435 1405068 cri.go:89] found id: "500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:36.875455 1405068 cri.go:89] found id: ""
	I0815 00:42:36.875463 1405068 logs.go:276] 1 containers: [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5]
	I0815 00:42:36.875520 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:36.879077 1405068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:42:36.879158 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:42:36.919070 1405068 cri.go:89] found id: "a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:36.919093 1405068 cri.go:89] found id: ""
	I0815 00:42:36.919100 1405068 logs.go:276] 1 containers: [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e]
	I0815 00:42:36.919158 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:36.922739 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:42:36.922824 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:42:36.960870 1405068 cri.go:89] found id: "b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:36.960893 1405068 cri.go:89] found id: ""
	I0815 00:42:36.960901 1405068 logs.go:276] 1 containers: [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1]
	I0815 00:42:36.960964 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:36.964534 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:42:36.964627 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:42:37.015395 1405068 cri.go:89] found id: "1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:37.015429 1405068 cri.go:89] found id: ""
	I0815 00:42:37.015438 1405068 logs.go:276] 1 containers: [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215]
	I0815 00:42:37.015512 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:37.020002 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:42:37.020128 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:42:37.076474 1405068 cri.go:89] found id: "a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:37.076539 1405068 cri.go:89] found id: ""
	I0815 00:42:37.076554 1405068 logs.go:276] 1 containers: [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55]
	I0815 00:42:37.076627 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:37.080229 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:42:37.080328 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:42:37.118486 1405068 cri.go:89] found id: "5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:37.118508 1405068 cri.go:89] found id: ""
	I0815 00:42:37.118517 1405068 logs.go:276] 1 containers: [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c]
	I0815 00:42:37.118578 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:37.123361 1405068 logs.go:123] Gathering logs for dmesg ...
	I0815 00:42:37.123388 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:42:37.141233 1405068 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:42:37.141262 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:42:37.274363 1405068 logs.go:123] Gathering logs for kube-apiserver [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da] ...
	I0815 00:42:37.274413 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:37.329315 1405068 logs.go:123] Gathering logs for etcd [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5] ...
	I0815 00:42:37.329346 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:37.383343 1405068 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:42:37.383374 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:42:37.482274 1405068 logs.go:123] Gathering logs for kindnet [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c] ...
	I0815 00:42:37.482312 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:37.545254 1405068 logs.go:123] Gathering logs for container status ...
	I0815 00:42:37.545287 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:42:37.594933 1405068 logs.go:123] Gathering logs for kubelet ...
	I0815 00:42:37.594965 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 00:42:37.645851 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.597954    1525 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.646123 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:37.646313 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.646553 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:37.646741 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.646968 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:37.688469 1405068 logs.go:123] Gathering logs for coredns [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e] ...
	I0815 00:42:37.688499 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:37.735065 1405068 logs.go:123] Gathering logs for kube-scheduler [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1] ...
	I0815 00:42:37.735135 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:37.788975 1405068 logs.go:123] Gathering logs for kube-proxy [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215] ...
	I0815 00:42:37.789012 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:37.826310 1405068 logs.go:123] Gathering logs for kube-controller-manager [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55] ...
	I0815 00:42:37.826344 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:37.900627 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:37.900660 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 00:42:37.900723 1405068 out.go:239] X Problems detected in kubelet:
	W0815 00:42:37.900737 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:37.900751 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.900759 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:37.900770 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.900779 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:37.900786 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:37.900792 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:42:47.920171 1405068 system_pods.go:59] 18 kube-system pods found
	I0815 00:42:47.920217 1405068 system_pods.go:61] "coredns-6f6b679f8f-pdg4h" [51767a84-0d40-4da1-924b-28e15407b138] Running
	I0815 00:42:47.920225 1405068 system_pods.go:61] "csi-hostpath-attacher-0" [28339244-9d98-4106-9481-245c68b0259c] Running
	I0815 00:42:47.920230 1405068 system_pods.go:61] "csi-hostpath-resizer-0" [030b9622-b512-430b-a968-a060d6533161] Running
	I0815 00:42:47.920235 1405068 system_pods.go:61] "csi-hostpathplugin-b9g9b" [d4802ea3-64b4-40db-8a57-c4ab43810472] Running
	I0815 00:42:47.920240 1405068 system_pods.go:61] "etcd-addons-177998" [29e3ecb7-e391-4c97-9e64-75907dddb196] Running
	I0815 00:42:47.920244 1405068 system_pods.go:61] "kindnet-slrd6" [420c6f3b-f588-4914-ad0c-5bedb94fb3e4] Running
	I0815 00:42:47.920250 1405068 system_pods.go:61] "kube-apiserver-addons-177998" [5e00f426-e2d4-459d-b4b3-0b3fc3009131] Running
	I0815 00:42:47.920255 1405068 system_pods.go:61] "kube-controller-manager-addons-177998" [1a764421-c392-4ef5-82f1-acee0a48e083] Running
	I0815 00:42:47.920260 1405068 system_pods.go:61] "kube-ingress-dns-minikube" [024edd96-4c4b-4440-a323-a9f32fe96019] Running
	I0815 00:42:47.920269 1405068 system_pods.go:61] "kube-proxy-5wktb" [7f98e909-5af9-4423-a14f-33f1ff0a5a08] Running
	I0815 00:42:47.920274 1405068 system_pods.go:61] "kube-scheduler-addons-177998" [ba4ed04a-90b5-4852-85fe-d7cf246020bc] Running
	I0815 00:42:47.920282 1405068 system_pods.go:61] "metrics-server-8988944d9-rf2fb" [727c86c4-3855-401b-98e3-b3bc46d8e36a] Running
	I0815 00:42:47.920287 1405068 system_pods.go:61] "nvidia-device-plugin-daemonset-7b7wb" [83483a1f-e9b5-416a-922d-45fe573a70cc] Running
	I0815 00:42:47.920293 1405068 system_pods.go:61] "registry-6fb4cdfc84-pjk6z" [8d5b9336-317e-46bc-aca7-c582ff9a713b] Running
	I0815 00:42:47.920297 1405068 system_pods.go:61] "registry-proxy-mhl5f" [ffcca5c8-f85a-422d-ae88-317ee7017802] Running
	I0815 00:42:47.920312 1405068 system_pods.go:61] "snapshot-controller-56fcc65765-5gn92" [960ee139-dbed-4d66-840e-a8e0e55578e3] Running
	I0815 00:42:47.920316 1405068 system_pods.go:61] "snapshot-controller-56fcc65765-fnpns" [0552d1c5-19ff-4f08-97d8-f16f0b6ff21f] Running
	I0815 00:42:47.920319 1405068 system_pods.go:61] "storage-provisioner" [c9d10c3f-3886-4a97-a23a-c59849cd617f] Running
	I0815 00:42:47.920326 1405068 system_pods.go:74] duration metric: took 11.135320218s to wait for pod list to return data ...
	I0815 00:42:47.920334 1405068 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:42:47.922926 1405068 default_sa.go:45] found service account: "default"
	I0815 00:42:47.922953 1405068 default_sa.go:55] duration metric: took 2.609696ms for default service account to be created ...
	I0815 00:42:47.922962 1405068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:42:47.933567 1405068 system_pods.go:86] 18 kube-system pods found
	I0815 00:42:47.933603 1405068 system_pods.go:89] "coredns-6f6b679f8f-pdg4h" [51767a84-0d40-4da1-924b-28e15407b138] Running
	I0815 00:42:47.933610 1405068 system_pods.go:89] "csi-hostpath-attacher-0" [28339244-9d98-4106-9481-245c68b0259c] Running
	I0815 00:42:47.933638 1405068 system_pods.go:89] "csi-hostpath-resizer-0" [030b9622-b512-430b-a968-a060d6533161] Running
	I0815 00:42:47.933649 1405068 system_pods.go:89] "csi-hostpathplugin-b9g9b" [d4802ea3-64b4-40db-8a57-c4ab43810472] Running
	I0815 00:42:47.933654 1405068 system_pods.go:89] "etcd-addons-177998" [29e3ecb7-e391-4c97-9e64-75907dddb196] Running
	I0815 00:42:47.933659 1405068 system_pods.go:89] "kindnet-slrd6" [420c6f3b-f588-4914-ad0c-5bedb94fb3e4] Running
	I0815 00:42:47.933668 1405068 system_pods.go:89] "kube-apiserver-addons-177998" [5e00f426-e2d4-459d-b4b3-0b3fc3009131] Running
	I0815 00:42:47.933673 1405068 system_pods.go:89] "kube-controller-manager-addons-177998" [1a764421-c392-4ef5-82f1-acee0a48e083] Running
	I0815 00:42:47.933678 1405068 system_pods.go:89] "kube-ingress-dns-minikube" [024edd96-4c4b-4440-a323-a9f32fe96019] Running
	I0815 00:42:47.933689 1405068 system_pods.go:89] "kube-proxy-5wktb" [7f98e909-5af9-4423-a14f-33f1ff0a5a08] Running
	I0815 00:42:47.933693 1405068 system_pods.go:89] "kube-scheduler-addons-177998" [ba4ed04a-90b5-4852-85fe-d7cf246020bc] Running
	I0815 00:42:47.933697 1405068 system_pods.go:89] "metrics-server-8988944d9-rf2fb" [727c86c4-3855-401b-98e3-b3bc46d8e36a] Running
	I0815 00:42:47.933728 1405068 system_pods.go:89] "nvidia-device-plugin-daemonset-7b7wb" [83483a1f-e9b5-416a-922d-45fe573a70cc] Running
	I0815 00:42:47.933776 1405068 system_pods.go:89] "registry-6fb4cdfc84-pjk6z" [8d5b9336-317e-46bc-aca7-c582ff9a713b] Running
	I0815 00:42:47.933780 1405068 system_pods.go:89] "registry-proxy-mhl5f" [ffcca5c8-f85a-422d-ae88-317ee7017802] Running
	I0815 00:42:47.933784 1405068 system_pods.go:89] "snapshot-controller-56fcc65765-5gn92" [960ee139-dbed-4d66-840e-a8e0e55578e3] Running
	I0815 00:42:47.933788 1405068 system_pods.go:89] "snapshot-controller-56fcc65765-fnpns" [0552d1c5-19ff-4f08-97d8-f16f0b6ff21f] Running
	I0815 00:42:47.933792 1405068 system_pods.go:89] "storage-provisioner" [c9d10c3f-3886-4a97-a23a-c59849cd617f] Running
	I0815 00:42:47.933812 1405068 system_pods.go:126] duration metric: took 10.844228ms to wait for k8s-apps to be running ...
	I0815 00:42:47.933826 1405068 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:42:47.933898 1405068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:42:47.946529 1405068 system_svc.go:56] duration metric: took 12.693281ms WaitForService to wait for kubelet
	I0815 00:42:47.946555 1405068 kubeadm.go:582] duration metric: took 2m42.206870888s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:42:47.946576 1405068 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:42:47.950166 1405068 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 00:42:47.950202 1405068 node_conditions.go:123] node cpu capacity is 2
	I0815 00:42:47.950214 1405068 node_conditions.go:105] duration metric: took 3.633114ms to run NodePressure ...
	I0815 00:42:47.950244 1405068 start.go:241] waiting for startup goroutines ...
	I0815 00:42:47.950262 1405068 start.go:246] waiting for cluster config update ...
	I0815 00:42:47.950278 1405068 start.go:255] writing updated cluster config ...
	I0815 00:42:47.950608 1405068 ssh_runner.go:195] Run: rm -f paused
	I0815 00:42:48.300155 1405068 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:42:48.302533 1405068 out.go:177] * Done! kubectl is now configured to use "addons-177998" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.326575996Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=00a3b6a3-a74d-4b6a-84f5-581dc7cbd7ec name=/runtime.v1.ImageService/ImageStatus
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.327457031Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=85dcd373-5f18-40fc-b303-4b38d051f73b name=/runtime.v1.ImageService/ImageStatus
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.328050011Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=85dcd373-5f18-40fc-b303-4b38d051f73b name=/runtime.v1.ImageService/ImageStatus
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.329470448Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-nh2g8/hello-world-app" id=a354a6b5-4f43-453f-9028-7ef4ca7038c1 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.329565225Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.351509459Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c6a6badc50e46f3152d8ffd455210fd02bfaaf06e1acae42ced214678078e5fd/merged/etc/passwd: no such file or directory"
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.351724022Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c6a6badc50e46f3152d8ffd455210fd02bfaaf06e1acae42ced214678078e5fd/merged/etc/group: no such file or directory"
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.422580433Z" level=info msg="Created container a09856c4982d18c3fb4b983d01ff7c72565c2e5ad90d22ac8b1eb944bce5a70d: default/hello-world-app-55bf9c44b4-nh2g8/hello-world-app" id=a354a6b5-4f43-453f-9028-7ef4ca7038c1 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.423490876Z" level=info msg="Starting container: a09856c4982d18c3fb4b983d01ff7c72565c2e5ad90d22ac8b1eb944bce5a70d" id=7925db0c-37a6-4821-b0e7-c68cf3276179 name=/runtime.v1.RuntimeService/StartContainer
	Aug 15 00:46:43 addons-177998 crio[967]: time="2024-08-15 00:46:43.434149104Z" level=info msg="Started container" PID=8223 containerID=a09856c4982d18c3fb4b983d01ff7c72565c2e5ad90d22ac8b1eb944bce5a70d description=default/hello-world-app-55bf9c44b4-nh2g8/hello-world-app id=7925db0c-37a6-4821-b0e7-c68cf3276179 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0934939c3006463f1bdff2267ce4eec04f771577548c250bf93312e9ea66507c
	Aug 15 00:46:44 addons-177998 crio[967]: time="2024-08-15 00:46:44.828274038Z" level=info msg="Stopping container: 2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c (timeout: 2s)" id=8433a6c2-6ee7-4292-9de7-37cbf89007d0 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.834707805Z" level=warning msg="Stopping container 2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=8433a6c2-6ee7-4292-9de7-37cbf89007d0 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 00:46:46 addons-177998 conmon[4644]: conmon 2766b6290ac18eae3473 <ninfo>: container 4655 exited with status 137
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.979152941Z" level=info msg="Stopped container 2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c: ingress-nginx/ingress-nginx-controller-7559cbf597-b7t5l/controller" id=8433a6c2-6ee7-4292-9de7-37cbf89007d0 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.979711312Z" level=info msg="Stopping pod sandbox: 920723acc74a4845db34e4b0e24f1c1cb1417c82150007ff6f6fdc234bc01513" id=50f2dadb-a394-495b-88c0-e510d82e9178 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.983826293Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-RTJRBZBYZO35UUM3 - [0:0]\n:KUBE-HP-UYF2H6VZA55B5CXP - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-UYF2H6VZA55B5CXP\n-X KUBE-HP-RTJRBZBYZO35UUM3\nCOMMIT\n"
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.996023292Z" level=info msg="Closing host port tcp:80"
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.996078364Z" level=info msg="Closing host port tcp:443"
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.997513414Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.997549590Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.997735640Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7559cbf597-b7t5l Namespace:ingress-nginx ID:920723acc74a4845db34e4b0e24f1c1cb1417c82150007ff6f6fdc234bc01513 UID:7b601b21-8fc7-4d71-a40d-b9819be8ae08 NetNS:/var/run/netns/dad6d4e7-2b15-4924-9394-1bc69db8cd94 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 15 00:46:46 addons-177998 crio[967]: time="2024-08-15 00:46:46.997871565Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7559cbf597-b7t5l from CNI network \"kindnet\" (type=ptp)"
	Aug 15 00:46:47 addons-177998 crio[967]: time="2024-08-15 00:46:47.027739163Z" level=info msg="Stopped pod sandbox: 920723acc74a4845db34e4b0e24f1c1cb1417c82150007ff6f6fdc234bc01513" id=50f2dadb-a394-495b-88c0-e510d82e9178 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:46:47 addons-177998 crio[967]: time="2024-08-15 00:46:47.064794795Z" level=info msg="Removing container: 2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c" id=a32ab0a2-601e-44a1-a058-920f509df348 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:46:47 addons-177998 crio[967]: time="2024-08-15 00:46:47.079560694Z" level=info msg="Removed container 2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c: ingress-nginx/ingress-nginx-controller-7559cbf597-b7t5l/controller" id=a32ab0a2-601e-44a1-a058-920f509df348 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a09856c4982d1       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        8 seconds ago       Running             hello-world-app           0                   0934939c30064       hello-world-app-55bf9c44b4-nh2g8
	5a73034614771       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   73ef8fbd082c5       nginx
	feaaac1e3b1ae       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 minutes ago       Running             headlamp                  0                   fd824314340ff       headlamp-57fb76fcdb-zzp8q
	de6512a94c68d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago       Running             busybox                   0                   4d00b257068f6       busybox
	1e63db71930ff       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago       Running             local-path-provisioner    0                   0f4d8a768bf34       local-path-provisioner-86d989889c-28bw5
	539c31dc78280       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              patch                     0                   21bbefa25b849       ingress-nginx-admission-patch-dnr48
	0dd58915670bf       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   ed4ef1e3504ad       metrics-server-8988944d9-rf2fb
	bcea413941280       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                    0                   572f2fef0d790       ingress-nginx-admission-create-kgc9j
	a258c5a63a70f       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                   0                   e01e64c1da527       coredns-6f6b679f8f-pdg4h
	f60049867c20f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   8435130c9b79f       storage-provisioner
	5093e07d1185a       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                           6 minutes ago       Running             kindnet-cni               0                   4fce3aedd839d       kindnet-slrd6
	1a5db7b994921       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                             6 minutes ago       Running             kube-proxy                0                   3bb2a1c47b619       kube-proxy-5wktb
	500f0254c56c2       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             6 minutes ago       Running             etcd                      0                   5c7bbe52fd00b       etcd-addons-177998
	dbe184e7f765b       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                             6 minutes ago       Running             kube-apiserver            0                   aea7e54a7181f       kube-apiserver-addons-177998
	b5b63db1c68aa       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                             6 minutes ago       Running             kube-scheduler            0                   9868558830c36       kube-scheduler-addons-177998
	a7fc9ee7679f5       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                             6 minutes ago       Running             kube-controller-manager   0                   63a6290e97bea       kube-controller-manager-addons-177998
	
	
	==> coredns [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e] <==
	[INFO] 10.244.0.15:41126 - 13082 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002907416s
	[INFO] 10.244.0.15:51728 - 45599 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00017851s
	[INFO] 10.244.0.15:51728 - 43802 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121222s
	[INFO] 10.244.0.15:33352 - 19452 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107675s
	[INFO] 10.244.0.15:33352 - 43745 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000210361s
	[INFO] 10.244.0.15:53750 - 21595 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059569s
	[INFO] 10.244.0.15:53750 - 42073 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003703s
	[INFO] 10.244.0.15:41220 - 3637 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046317s
	[INFO] 10.244.0.15:41220 - 58167 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123954s
	[INFO] 10.244.0.15:34437 - 28038 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001607767s
	[INFO] 10.244.0.15:34437 - 48516 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001817865s
	[INFO] 10.244.0.15:40041 - 32770 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000061144s
	[INFO] 10.244.0.15:40041 - 27649 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042141s
	[INFO] 10.244.0.20:38247 - 29419 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000236142s
	[INFO] 10.244.0.20:38516 - 44887 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170394s
	[INFO] 10.244.0.20:46368 - 19249 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161517s
	[INFO] 10.244.0.20:33110 - 115 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149086s
	[INFO] 10.244.0.20:36775 - 28463 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144557s
	[INFO] 10.244.0.20:52840 - 10409 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010459s
	[INFO] 10.244.0.20:45939 - 58597 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002100718s
	[INFO] 10.244.0.20:57186 - 24916 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001846067s
	[INFO] 10.244.0.20:42101 - 10131 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001080353s
	[INFO] 10.244.0.20:47465 - 6664 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001466015s
	[INFO] 10.244.0.23:37402 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000296236s
	[INFO] 10.244.0.23:49096 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138256s
	
	
	==> describe nodes <==
	Name:               addons-177998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-177998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=addons-177998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_40_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-177998
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:39:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-177998
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:46:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:44:37 +0000   Thu, 15 Aug 2024 00:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:44:37 +0000   Thu, 15 Aug 2024 00:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:44:37 +0000   Thu, 15 Aug 2024 00:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:44:37 +0000   Thu, 15 Aug 2024 00:40:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-177998
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3af8476b7464292ae80b608fd543d32
	  System UUID:                0a13cbd1-f040-4763-85c6-5dd9afda65d5
	  Boot ID:                    a45aa34f-c9ce-4e83-8881-7d8273e4eb81
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  default                     hello-world-app-55bf9c44b4-nh2g8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  headlamp                    headlamp-57fb76fcdb-zzp8q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 coredns-6f6b679f8f-pdg4h                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m45s
	  kube-system                 etcd-addons-177998                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m51s
	  kube-system                 kindnet-slrd6                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m45s
	  kube-system                 kube-apiserver-addons-177998               250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-controller-manager-addons-177998      200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-proxy-5wktb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-scheduler-addons-177998               100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 metrics-server-8988944d9-rf2fb             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m42s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  local-path-storage          local-path-provisioner-86d989889c-28bw5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)   100m (5%)
	  memory             420Mi (5%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m39s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m59s (x8 over 6m59s)  kubelet          Node addons-177998 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m59s (x8 over 6m59s)  kubelet          Node addons-177998 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m59s (x7 over 6m59s)  kubelet          Node addons-177998 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m52s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m51s                  kubelet          Node addons-177998 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m51s                  kubelet          Node addons-177998 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m51s                  kubelet          Node addons-177998 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m47s                  node-controller  Node addons-177998 event: Registered Node addons-177998 in Controller
	  Normal   NodeReady                5m59s                  kubelet          Node addons-177998 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug14 22:01] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[Aug15 00:11] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.606282] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5] <==
	{"level":"warn","ts":"2024-08-15T00:40:09.412527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"339.487998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.412587Z","caller":"traceutil/trace.go:171","msg":"trace[394224514] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:369; }","duration":"339.568341ms","start":"2024-08-15T00:40:09.073007Z","end":"2024-08-15T00:40:09.412575Z","steps":["trace[394224514] 'agreement among raft nodes before linearized reading'  (duration: 339.434049ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.412978Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:40:09.072968Z","time spent":"339.994636ms","remote":"127.0.0.1:39626","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 "}
	{"level":"info","ts":"2024-08-15T00:40:09.549676Z","caller":"traceutil/trace.go:171","msg":"trace[1584218464] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"242.901065ms","start":"2024-08-15T00:40:09.306756Z","end":"2024-08-15T00:40:09.549657Z","steps":["trace[1584218464] 'process raft request'  (duration: 218.206825ms)","trace[1584218464] 'compare'  (duration: 24.543013ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:40:09.563494Z","caller":"traceutil/trace.go:171","msg":"trace[1622755248] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"250.582814ms","start":"2024-08-15T00:40:09.312890Z","end":"2024-08-15T00:40:09.563473Z","steps":["trace[1622755248] 'process raft request'  (duration: 236.72453ms)","trace[1622755248] 'attach lease to kv pair' {req_type:put; key:/registry/daemonsets/kube-system/kube-proxy; req_size:2860; } (duration: 13.569828ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:40:09.564136Z","caller":"traceutil/trace.go:171","msg":"trace[88815966] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"251.139364ms","start":"2024-08-15T00:40:09.312987Z","end":"2024-08-15T00:40:09.564126Z","steps":["trace[88815966] 'process raft request'  (duration: 250.293339ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:40:09.569059Z","caller":"traceutil/trace.go:171","msg":"trace[1828840549] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"256.009464ms","start":"2024-08-15T00:40:09.313035Z","end":"2024-08-15T00:40:09.569044Z","steps":["trace[1828840549] 'process raft request'  (duration: 250.898545ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:40:09.569462Z","caller":"traceutil/trace.go:171","msg":"trace[474701671] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"230.890513ms","start":"2024-08-15T00:40:09.338562Z","end":"2024-08-15T00:40:09.569452Z","steps":["trace[474701671] 'process raft request'  (duration: 225.532957ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:40:09.575394Z","caller":"traceutil/trace.go:171","msg":"trace[1076175795] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"236.633658ms","start":"2024-08-15T00:40:09.338746Z","end":"2024-08-15T00:40:09.575380Z","steps":["trace[1076175795] 'process raft request'  (duration: 230.66722ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:40:09.575769Z","caller":"traceutil/trace.go:171","msg":"trace[653685191] linearizableReadLoop","detail":"{readStateIndex:384; appliedIndex:378; }","duration":"163.356699ms","start":"2024-08-15T00:40:09.412402Z","end":"2024-08-15T00:40:09.575759Z","steps":["trace[653685191] 'read index received'  (duration: 112.283307ms)","trace[653685191] 'applied index is now lower than readState.Index'  (duration: 51.072506ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T00:40:09.575920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.378686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.575975Z","caller":"traceutil/trace.go:171","msg":"trace[766774977] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:0; response_revision:375; }","duration":"237.450678ms","start":"2024-08-15T00:40:09.338515Z","end":"2024-08-15T00:40:09.575965Z","steps":["trace[766774977] 'agreement among raft nodes before linearized reading'  (duration: 237.359659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.576169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.668292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.576235Z","caller":"traceutil/trace.go:171","msg":"trace[1145682486] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:375; }","duration":"237.723463ms","start":"2024-08-15T00:40:09.338492Z","end":"2024-08-15T00:40:09.576215Z","steps":["trace[1145682486] 'agreement among raft nodes before linearized reading'  (duration: 237.643603ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.576431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.71557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-15T00:40:09.576500Z","caller":"traceutil/trace.go:171","msg":"trace[85294301] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:375; }","duration":"269.785845ms","start":"2024-08-15T00:40:09.306706Z","end":"2024-08-15T00:40:09.576491Z","steps":["trace[85294301] 'agreement among raft nodes before linearized reading'  (duration: 269.692866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.576669Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.060986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.576734Z","caller":"traceutil/trace.go:171","msg":"trace[1742780218] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:375; }","duration":"389.118873ms","start":"2024-08-15T00:40:09.187597Z","end":"2024-08-15T00:40:09.576716Z","steps":["trace[1742780218] 'agreement among raft nodes before linearized reading'  (duration: 389.032998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.576780Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:40:09.187581Z","time spent":"389.191093ms","remote":"127.0.0.1:39796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" "}
	{"level":"warn","ts":"2024-08-15T00:40:09.587749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.334698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-slrd6\" ","response":"range_response_count:1 size:3689"}
	{"level":"info","ts":"2024-08-15T00:40:09.587904Z","caller":"traceutil/trace.go:171","msg":"trace[367980808] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-slrd6; range_end:; response_count:1; response_revision:375; }","duration":"400.49935ms","start":"2024-08-15T00:40:09.187390Z","end":"2024-08-15T00:40:09.587890Z","steps":["trace[367980808] 'agreement among raft nodes before linearized reading'  (duration: 400.25835ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.587985Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:40:09.187352Z","time spent":"400.623336ms","remote":"127.0.0.1:39766","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":3713,"request content":"key:\"/registry/pods/kube-system/kindnet-slrd6\" "}
	{"level":"warn","ts":"2024-08-15T00:40:09.588818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"515.756501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.588951Z","caller":"traceutil/trace.go:171","msg":"trace[1304732980] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:375; }","duration":"515.893206ms","start":"2024-08-15T00:40:09.073046Z","end":"2024-08-15T00:40:09.588939Z","steps":["trace[1304732980] 'agreement among raft nodes before linearized reading'  (duration: 515.717158ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.589019Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:40:09.073035Z","time spent":"515.963244ms","remote":"127.0.0.1:39796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" "}
	
	
	==> kernel <==
	 00:46:52 up  9:29,  0 users,  load average: 0.20, 1.29, 2.06
	Linux addons-177998 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c] <==
	I0815 00:45:43.040053       1 main.go:299] handling current node
	W0815 00:45:47.090132       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:45:47.090168       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 00:45:53.039973       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:45:53.040008       1 main.go:299] handling current node
	W0815 00:45:59.506924       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:45:59.506962       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0815 00:46:00.461385       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:46:00.461416       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 00:46:03.039761       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:46:03.039804       1 main.go:299] handling current node
	I0815 00:46:13.039741       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:46:13.039864       1 main.go:299] handling current node
	I0815 00:46:23.039681       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:46:23.039718       1 main.go:299] handling current node
	W0815 00:46:31.807607       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:46:31.807645       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 00:46:33.040127       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:46:33.040165       1 main.go:299] handling current node
	W0815 00:46:33.450715       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:46:33.450746       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0815 00:46:37.154449       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:46:37.154482       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 00:46:43.042919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:46:43.043032       1 main.go:299] handling current node
	
	
	==> kube-apiserver [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da] <==
	I0815 00:42:13.967783       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0815 00:42:13.977827       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0815 00:42:56.998903       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38598: use of closed network connection
	E0815 00:42:57.258759       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38630: use of closed network connection
	E0815 00:42:57.390493       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38658: use of closed network connection
	I0815 00:43:25.436324       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 00:43:58.975116       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.236.47"}
	I0815 00:44:04.212452       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.220622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:44:04.244233       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.244367       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:44:04.253615       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.253672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:44:04.262919       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.263034       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:44:04.317660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.317714       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 00:44:05.253805       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 00:44:05.318428       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0815 00:44:05.415613       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0815 00:44:16.847205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 00:44:17.884266       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0815 00:44:22.438678       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 00:44:22.744279       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.182.112"}
	I0815 00:46:41.777892       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.72.163"}
	
	
	==> kube-controller-manager [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55] <==
	W0815 00:45:33.518663       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:45:33.518705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:45:37.124509       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:45:37.124555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:45:46.273986       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:45:46.274035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:46:05.953937       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:46:05.953986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:46:08.594786       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:46:08.594832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:46:23.808006       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:46:23.808049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:46:40.524545       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:46:40.524587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 00:46:41.541878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.527228ms"
	I0815 00:46:41.549466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.532138ms"
	I0815 00:46:41.549557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.324µs"
	I0815 00:46:41.564629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.517µs"
	W0815 00:46:42.055065       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:46:42.055119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 00:46:43.792714       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0815 00:46:43.799310       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0815 00:46:43.801103       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="8.205µs"
	I0815 00:46:44.086473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.924885ms"
	I0815 00:46:44.087760       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.877µs"
	
	
	==> kube-proxy [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215] <==
	I0815 00:40:11.471016       1 server_linux.go:66] "Using iptables proxy"
	I0815 00:40:12.610931       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 00:40:12.611071       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:40:12.770544       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 00:40:12.770740       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:40:12.806111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:40:12.806794       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:40:12.841767       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:40:12.914507       1 config.go:197] "Starting service config controller"
	I0815 00:40:12.914548       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:40:12.914573       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:40:12.914578       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:40:12.915044       1 config.go:326] "Starting node config controller"
	I0815 00:40:12.915063       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:40:13.050730       1 shared_informer.go:320] Caches are synced for node config
	I0815 00:40:13.051674       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:40:13.051698       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1] <==
	W0815 00:39:58.145746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:58.145810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.145928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 00:39:58.145983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.146080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:58.146131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.146250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:39:58.146444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.146320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 00:39:58.146619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.146588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:58.146733       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.067754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 00:39:59.067890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.078935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 00:39:59.079080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.230101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:59.230150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.253034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:39:59.253175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.320178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:59.320312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.539731       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:39:59.539778       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 00:40:01.931847       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:46:41 addons-177998 kubelet[1525]: I0815 00:46:41.566166    1525 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vngzv\" (UniqueName: \"kubernetes.io/projected/74cafcbe-2866-43b2-acb9-78c53d89c485-kube-api-access-vngzv\") pod \"hello-world-app-55bf9c44b4-nh2g8\" (UID: \"74cafcbe-2866-43b2-acb9-78c53d89c485\") " pod="default/hello-world-app-55bf9c44b4-nh2g8"
	Aug 15 00:46:42 addons-177998 kubelet[1525]: I0815 00:46:42.979782    1525 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqqpr\" (UniqueName: \"kubernetes.io/projected/024edd96-4c4b-4440-a323-a9f32fe96019-kube-api-access-sqqpr\") pod \"024edd96-4c4b-4440-a323-a9f32fe96019\" (UID: \"024edd96-4c4b-4440-a323-a9f32fe96019\") "
	Aug 15 00:46:42 addons-177998 kubelet[1525]: I0815 00:46:42.981760    1525 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/024edd96-4c4b-4440-a323-a9f32fe96019-kube-api-access-sqqpr" (OuterVolumeSpecName: "kube-api-access-sqqpr") pod "024edd96-4c4b-4440-a323-a9f32fe96019" (UID: "024edd96-4c4b-4440-a323-a9f32fe96019"). InnerVolumeSpecName "kube-api-access-sqqpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 00:46:43 addons-177998 kubelet[1525]: I0815 00:46:43.048979    1525 scope.go:117] "RemoveContainer" containerID="8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0"
	Aug 15 00:46:43 addons-177998 kubelet[1525]: I0815 00:46:43.078826    1525 scope.go:117] "RemoveContainer" containerID="8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0"
	Aug 15 00:46:43 addons-177998 kubelet[1525]: E0815 00:46:43.079248    1525 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0\": container with ID starting with 8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0 not found: ID does not exist" containerID="8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0"
	Aug 15 00:46:43 addons-177998 kubelet[1525]: I0815 00:46:43.079289    1525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0"} err="failed to get container status \"8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0\": rpc error: code = NotFound desc = could not find container \"8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0\": container with ID starting with 8029afc804040ebe7b2e1b860f9894526869da9a0665b7dfc74f21420d8d21e0 not found: ID does not exist"
	Aug 15 00:46:43 addons-177998 kubelet[1525]: I0815 00:46:43.080598    1525 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sqqpr\" (UniqueName: \"kubernetes.io/projected/024edd96-4c4b-4440-a323-a9f32fe96019-kube-api-access-sqqpr\") on node \"addons-177998\" DevicePath \"\""
	Aug 15 00:46:45 addons-177998 kubelet[1525]: I0815 00:46:45.044473    1525 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 00:46:45 addons-177998 kubelet[1525]: I0815 00:46:45.059643    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="024edd96-4c4b-4440-a323-a9f32fe96019" path="/var/lib/kubelet/pods/024edd96-4c4b-4440-a323-a9f32fe96019/volumes"
	Aug 15 00:46:45 addons-177998 kubelet[1525]: I0815 00:46:45.060075    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94801fc2-2a70-4d00-a385-8061da0eda18" path="/var/lib/kubelet/pods/94801fc2-2a70-4d00-a385-8061da0eda18/volumes"
	Aug 15 00:46:45 addons-177998 kubelet[1525]: I0815 00:46:45.066441    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca52fbe-cc18-4a1a-b6f4-b91339307633" path="/var/lib/kubelet/pods/aca52fbe-cc18-4a1a-b6f4-b91339307633/volumes"
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.063369    1525 scope.go:117] "RemoveContainer" containerID="2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c"
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.079897    1525 scope.go:117] "RemoveContainer" containerID="2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c"
	Aug 15 00:46:47 addons-177998 kubelet[1525]: E0815 00:46:47.080289    1525 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c\": container with ID starting with 2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c not found: ID does not exist" containerID="2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c"
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.080326    1525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c"} err="failed to get container status \"2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c\": rpc error: code = NotFound desc = could not find container \"2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c\": container with ID starting with 2766b6290ac18eae3473533dae6d516764ab96bc9d3a06bcbacf949dca43c18c not found: ID does not exist"
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.209935    1525 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlj58\" (UniqueName: \"kubernetes.io/projected/7b601b21-8fc7-4d71-a40d-b9819be8ae08-kube-api-access-jlj58\") pod \"7b601b21-8fc7-4d71-a40d-b9819be8ae08\" (UID: \"7b601b21-8fc7-4d71-a40d-b9819be8ae08\") "
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.209994    1525 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b601b21-8fc7-4d71-a40d-b9819be8ae08-webhook-cert\") pod \"7b601b21-8fc7-4d71-a40d-b9819be8ae08\" (UID: \"7b601b21-8fc7-4d71-a40d-b9819be8ae08\") "
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.212217    1525 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b601b21-8fc7-4d71-a40d-b9819be8ae08-kube-api-access-jlj58" (OuterVolumeSpecName: "kube-api-access-jlj58") pod "7b601b21-8fc7-4d71-a40d-b9819be8ae08" (UID: "7b601b21-8fc7-4d71-a40d-b9819be8ae08"). InnerVolumeSpecName "kube-api-access-jlj58". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.215510    1525 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b601b21-8fc7-4d71-a40d-b9819be8ae08-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7b601b21-8fc7-4d71-a40d-b9819be8ae08" (UID: "7b601b21-8fc7-4d71-a40d-b9819be8ae08"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.310571    1525 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jlj58\" (UniqueName: \"kubernetes.io/projected/7b601b21-8fc7-4d71-a40d-b9819be8ae08-kube-api-access-jlj58\") on node \"addons-177998\" DevicePath \"\""
	Aug 15 00:46:47 addons-177998 kubelet[1525]: I0815 00:46:47.310614    1525 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b601b21-8fc7-4d71-a40d-b9819be8ae08-webhook-cert\") on node \"addons-177998\" DevicePath \"\""
	Aug 15 00:46:49 addons-177998 kubelet[1525]: I0815 00:46:49.045938    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b601b21-8fc7-4d71-a40d-b9819be8ae08" path="/var/lib/kubelet/pods/7b601b21-8fc7-4d71-a40d-b9819be8ae08/volumes"
	Aug 15 00:46:51 addons-177998 kubelet[1525]: E0815 00:46:51.214721    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682811214074034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:46:51 addons-177998 kubelet[1525]: E0815 00:46:51.214752    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682811214074034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f60049867c20fc8cd5364cc07d6d0f67681b24ba063cc8b376da62f90ee2ddfb] <==
	I0815 00:40:54.539346       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 00:40:54.583782       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 00:40:54.584495       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 00:40:54.619934       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 00:40:54.623598       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-177998_55a21e83-1fac-4f1b-8c2e-e17c900684f0!
	I0815 00:40:54.620565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"252a22bf-3495-4753-83b0-01175625f944", APIVersion:"v1", ResourceVersion:"939", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-177998_55a21e83-1fac-4f1b-8c2e-e17c900684f0 became leader
	I0815 00:40:54.724300       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-177998_55a21e83-1fac-4f1b-8c2e-e17c900684f0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-177998 -n addons-177998
helpers_test.go:261: (dbg) Run:  kubectl --context addons-177998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.09s)
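Note on the failure above: curl exits with code 28 when a request times out, and minikube ssh surfaces the remote command's exit status, so the ingress controller never answered on 127.0.0.1 inside the node within the roughly 2m10s the test waited. A minimal manual re-check, shown only as a sketch (it assumes the addons-177998 profile is still running and the ingress addon has been re-enabled, since the test disables it during cleanup), could look like:

	# hypothetical reproduction commands, not part of the test output
	out/minikube-linux-arm64 -p addons-177998 ssh "curl -v --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-177998 get ingress -A
	kubectl --context addons-177998 -n ingress-nginx logs deploy/ingress-nginx-controller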

                                                
                                    
TestAddons/parallel/MetricsServer (348.41s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 7.097498ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-rf2fb" [727c86c4-3855-401b-98e3-b3bc46d8e36a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003336866s
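Reading aid for the retries below: kubectl top pods reads pod metrics from the metrics.k8s.io API, which metrics-server registers as the v1beta1.metrics.k8s.io APIService (see the kube-apiserver log in the post-mortem above), so "Metrics not available" means that API never started returning data for these pods. A hedged way to inspect that path by hand, assuming the addon's usual Deployment name metrics-server in kube-system, would be:

	# illustrative checks only; the Deployment name is an assumption
	kubectl --context addons-177998 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-177998 -n kube-system logs deploy/metrics-server
	kubectl --context addons-177998 get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods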
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (102.466211ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 4m3.802531841s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (86.889628ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 4m5.614689215s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (105.413866ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 4m10.976720079s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (163.228909ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 4m17.265284343s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (100.680895ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 4m30.600999914s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (93.074216ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 4m39.015817393s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (90.9696ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 5m8.631279863s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (94.001833ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 5m42.522573064s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (105.948537ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 6m35.553337726s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (91.872121ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 7m40.706816777s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (91.684349ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 9m9.025189364s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-177998 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-177998 top pods -n kube-system: exit status 1 (94.474878ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-pdg4h, age: 9m42.945831054s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
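The retries above never see per-pod metrics before the test gives up. A minimal diagnostic sketch for this kind of failure, assuming the addon's usual metrics-server Deployment in kube-system (these commands are illustrative and not part of the captured run):

	# Check that the aggregated Metrics API is registered and reports Available
	kubectl --context addons-177998 get apiservice v1beta1.metrics.k8s.io
	# Check the metrics-server Deployment and its recent logs for scrape errors
	kubectl --context addons-177998 -n kube-system get deploy metrics-server
	kubectl --context addons-177998 -n kube-system logs deploy/metrics-server --tail=20
	# Node metrics usually become available before per-pod metrics
	kubectl --context addons-177998 top nodes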
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-177998
helpers_test.go:235: (dbg) docker inspect addons-177998:

-- stdout --
	[
	    {
	        "Id": "f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c",
	        "Created": "2024-08-15T00:39:31.885620063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1405563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T00:39:32.048929554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c/hostname",
	        "HostsPath": "/var/lib/docker/containers/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c/hosts",
	        "LogPath": "/var/lib/docker/containers/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c/f371aab230120518493246b9f37f215fab3fc93241af53afb4e52cbc7c3db99c-json.log",
	        "Name": "/addons-177998",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-177998:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-177998",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5c37f7a13c271ecb6e5a866ea04cdedad952da520e99e0cba0cbc34549f2fd75-init/diff:/var/lib/docker/overlay2/433fc574d59582b9724e66836c411c49856e3ca47c5bf1f4fddf41d4347d66bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c37f7a13c271ecb6e5a866ea04cdedad952da520e99e0cba0cbc34549f2fd75/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c37f7a13c271ecb6e5a866ea04cdedad952da520e99e0cba0cbc34549f2fd75/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c37f7a13c271ecb6e5a866ea04cdedad952da520e99e0cba0cbc34549f2fd75/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-177998",
	                "Source": "/var/lib/docker/volumes/addons-177998/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-177998",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-177998",
	                "name.minikube.sigs.k8s.io": "addons-177998",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fcd529918dc8229ec8a14529dbf4ae2d92130c18352d82a20722b9bb641475d5",
	            "SandboxKey": "/var/run/docker/netns/fcd529918dc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34600"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34601"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34604"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34602"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34603"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-177998": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19e67a6599deee7485939663658b47300858ad6be1ef7a9abf09d7eb7eba7567",
	                    "EndpointID": "74fb35ccc1b99b398bdd521b351edda008d5fd38a090f82aee77223fcdba796c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-177998",
	                        "f371aab23012"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
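The dump above is the complete inspect JSON captured by the post-mortem. When only one or two fields are needed, the same data can be read with a Go format template, as the minikube code further down in these logs does for the container's IP address; a small sketch against this profile name:

	# Container state only
	docker inspect -f '{{.State.Status}}' addons-177998
	# IP address on the profile network (simplified from the template used later in these logs)
	docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" addons-177998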
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-177998 -n addons-177998
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-177998 logs -n 25: (1.436897005s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-283129 | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC |                     |
	|         | download-docker-283129                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-283129                                                                   | download-docker-283129 | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC | 15 Aug 24 00:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-338566   | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC |                     |
	|         | binary-mirror-338566                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37403                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-338566                                                                     | binary-mirror-338566   | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC | 15 Aug 24 00:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC |                     |
	|         | addons-177998                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC |                     |
	|         | addons-177998                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-177998 --wait=true                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:39 UTC | 15 Aug 24 00:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:42 UTC | 15 Aug 24 00:43 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-177998 ip                                                                            | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | -p addons-177998                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-177998 ssh cat                                                                       | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | /opt/local-path-provisioner/pvc-2ebb18e5-943e-4735-a7ec-2a8e78491a99_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-177998 addons                                                                        | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | addons-177998                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:43 UTC | 15 Aug 24 00:43 UTC |
	|         | -p addons-177998                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-177998 addons                                                                        | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:44 UTC | 15 Aug 24 00:44 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:44 UTC | 15 Aug 24 00:44 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:44 UTC | 15 Aug 24 00:44 UTC |
	|         | addons-177998                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-177998 ssh curl -s                                                                   | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-177998 ip                                                                            | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:46 UTC | 15 Aug 24 00:46 UTC |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:46 UTC | 15 Aug 24 00:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-177998 addons disable                                                                | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:46 UTC | 15 Aug 24 00:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-177998 addons                                                                        | addons-177998          | jenkins | v1.33.1 | 15 Aug 24 00:49 UTC | 15 Aug 24 00:49 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:39:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:39:06.663435 1405068 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:39:06.663683 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:39:06.663712 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:39:06.663732 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:39:06.664028 1405068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 00:39:06.664528 1405068 out.go:298] Setting JSON to false
	I0815 00:39:06.665471 1405068 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33689,"bootTime":1723648658,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0815 00:39:06.665590 1405068 start.go:139] virtualization:  
	I0815 00:39:06.668036 1405068 out.go:177] * [addons-177998] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 00:39:06.670279 1405068 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:39:06.670349 1405068 notify.go:220] Checking for updates...
	I0815 00:39:06.673720 1405068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:39:06.675387 1405068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 00:39:06.677145 1405068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	I0815 00:39:06.678840 1405068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 00:39:06.680662 1405068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:39:06.682694 1405068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:39:06.704497 1405068 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:39:06.704615 1405068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:39:06.769250 1405068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 00:39:06.758683232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:39:06.769373 1405068 docker.go:307] overlay module found
	I0815 00:39:06.771848 1405068 out.go:177] * Using the docker driver based on user configuration
	I0815 00:39:06.773693 1405068 start.go:297] selected driver: docker
	I0815 00:39:06.773713 1405068 start.go:901] validating driver "docker" against <nil>
	I0815 00:39:06.773727 1405068 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:39:06.774365 1405068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:39:06.826851 1405068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 00:39:06.817180385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:39:06.827037 1405068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:39:06.827282 1405068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:39:06.829488 1405068 out.go:177] * Using Docker driver with root privileges
	I0815 00:39:06.831656 1405068 cni.go:84] Creating CNI manager for ""
	I0815 00:39:06.831681 1405068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:39:06.831695 1405068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:39:06.831788 1405068 start.go:340] cluster config:
	{Name:addons-177998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:39:06.833954 1405068 out.go:177] * Starting "addons-177998" primary control-plane node in "addons-177998" cluster
	I0815 00:39:06.835832 1405068 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 00:39:06.837986 1405068 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:39:06.839688 1405068 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:39:06.839732 1405068 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:39:06.839744 1405068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0815 00:39:06.839753 1405068 cache.go:56] Caching tarball of preloaded images
	I0815 00:39:06.839831 1405068 preload.go:172] Found /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0815 00:39:06.839841 1405068 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:39:06.840226 1405068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/config.json ...
	I0815 00:39:06.840282 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/config.json: {Name:mk96a96c05a74a2b5c03f13fa38572f835869738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:06.857568 1405068 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:39:06.857758 1405068 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:39:06.857790 1405068 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 00:39:06.857795 1405068 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 00:39:06.857813 1405068 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 00:39:06.857819 1405068 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 00:39:23.609566 1405068 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 00:39:23.609608 1405068 cache.go:194] Successfully downloaded all kic artifacts
	I0815 00:39:23.609650 1405068 start.go:360] acquireMachinesLock for addons-177998: {Name:mk8732f60cab24aa263ea51a6dc6ae45b69ed64f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:39:23.610415 1405068 start.go:364] duration metric: took 716.155µs to acquireMachinesLock for "addons-177998"
	I0815 00:39:23.610460 1405068 start.go:93] Provisioning new machine with config: &{Name:addons-177998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:39:23.610561 1405068 start.go:125] createHost starting for "" (driver="docker")
	I0815 00:39:23.613009 1405068 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0815 00:39:23.613258 1405068 start.go:159] libmachine.API.Create for "addons-177998" (driver="docker")
	I0815 00:39:23.613292 1405068 client.go:168] LocalClient.Create starting
	I0815 00:39:23.613401 1405068 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem
	I0815 00:39:24.150713 1405068 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem
	I0815 00:39:24.620546 1405068 cli_runner.go:164] Run: docker network inspect addons-177998 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 00:39:24.636727 1405068 cli_runner.go:211] docker network inspect addons-177998 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 00:39:24.636817 1405068 network_create.go:284] running [docker network inspect addons-177998] to gather additional debugging logs...
	I0815 00:39:24.636837 1405068 cli_runner.go:164] Run: docker network inspect addons-177998
	W0815 00:39:24.650962 1405068 cli_runner.go:211] docker network inspect addons-177998 returned with exit code 1
	I0815 00:39:24.650995 1405068 network_create.go:287] error running [docker network inspect addons-177998]: docker network inspect addons-177998: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-177998 not found
	I0815 00:39:24.651010 1405068 network_create.go:289] output of [docker network inspect addons-177998]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-177998 not found
	
	** /stderr **
	I0815 00:39:24.651106 1405068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:39:24.669163 1405068 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400183be60}
	I0815 00:39:24.669205 1405068 network_create.go:124] attempt to create docker network addons-177998 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0815 00:39:24.669260 1405068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-177998 addons-177998
	I0815 00:39:24.745263 1405068 network_create.go:108] docker network addons-177998 192.168.49.0/24 created
	I0815 00:39:24.745298 1405068 kic.go:121] calculated static IP "192.168.49.2" for the "addons-177998" container
	I0815 00:39:24.745375 1405068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 00:39:24.760833 1405068 cli_runner.go:164] Run: docker volume create addons-177998 --label name.minikube.sigs.k8s.io=addons-177998 --label created_by.minikube.sigs.k8s.io=true
	I0815 00:39:24.779056 1405068 oci.go:103] Successfully created a docker volume addons-177998
	I0815 00:39:24.779159 1405068 cli_runner.go:164] Run: docker run --rm --name addons-177998-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177998 --entrypoint /usr/bin/test -v addons-177998:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 00:39:26.844938 1405068 cli_runner.go:217] Completed: docker run --rm --name addons-177998-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177998 --entrypoint /usr/bin/test -v addons-177998:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (2.065719274s)
	I0815 00:39:26.844972 1405068 oci.go:107] Successfully prepared a docker volume addons-177998
	I0815 00:39:26.844985 1405068 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:39:26.845005 1405068 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 00:39:26.845091 1405068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-177998:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0815 00:39:31.817942 1405068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-177998:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.972796187s)
	I0815 00:39:31.817981 1405068 kic.go:203] duration metric: took 4.972972653s to extract preloaded images to volume ...
	W0815 00:39:31.818121 1405068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 00:39:31.818244 1405068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 00:39:31.870702 1405068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-177998 --name addons-177998 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-177998 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-177998 --network addons-177998 --ip 192.168.49.2 --volume addons-177998:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 00:39:32.211825 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Running}}
	I0815 00:39:32.232384 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:39:32.259982 1405068 cli_runner.go:164] Run: docker exec addons-177998 stat /var/lib/dpkg/alternatives/iptables
	I0815 00:39:32.338794 1405068 oci.go:144] the created container "addons-177998" has a running status.
	I0815 00:39:32.338824 1405068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa...
	I0815 00:39:32.486700 1405068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 00:39:32.510882 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:39:32.532865 1405068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 00:39:32.532885 1405068 kic_runner.go:114] Args: [docker exec --privileged addons-177998 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 00:39:32.598012 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:39:32.623008 1405068 machine.go:94] provisionDockerMachine start ...
	I0815 00:39:32.623101 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:32.651978 1405068 main.go:141] libmachine: Using SSH client type: native
	I0815 00:39:32.652232 1405068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34600 <nil> <nil>}
	I0815 00:39:32.652241 1405068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:39:32.652894 1405068 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53966->127.0.0.1:34600: read: connection reset by peer
	I0815 00:39:35.786107 1405068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-177998
	
	I0815 00:39:35.786135 1405068 ubuntu.go:169] provisioning hostname "addons-177998"
	I0815 00:39:35.786209 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:35.803627 1405068 main.go:141] libmachine: Using SSH client type: native
	I0815 00:39:35.803882 1405068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34600 <nil> <nil>}
	I0815 00:39:35.803899 1405068 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-177998 && echo "addons-177998" | sudo tee /etc/hostname
	I0815 00:39:35.951415 1405068 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-177998
	
	I0815 00:39:35.951500 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:35.968769 1405068 main.go:141] libmachine: Using SSH client type: native
	I0815 00:39:35.969024 1405068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34600 <nil> <nil>}
	I0815 00:39:35.969046 1405068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-177998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-177998/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-177998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:39:36.114811 1405068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:39:36.114897 1405068 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-1398913/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-1398913/.minikube}
	I0815 00:39:36.114943 1405068 ubuntu.go:177] setting up certificates
	I0815 00:39:36.114979 1405068 provision.go:84] configureAuth start
	I0815 00:39:36.115058 1405068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177998
	I0815 00:39:36.131944 1405068 provision.go:143] copyHostCerts
	I0815 00:39:36.132047 1405068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem (1082 bytes)
	I0815 00:39:36.132179 1405068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem (1123 bytes)
	I0815 00:39:36.132243 1405068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem (1679 bytes)
	I0815 00:39:36.132304 1405068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem org=jenkins.addons-177998 san=[127.0.0.1 192.168.49.2 addons-177998 localhost minikube]
	I0815 00:39:36.507093 1405068 provision.go:177] copyRemoteCerts
	I0815 00:39:36.507168 1405068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:39:36.507210 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:36.523755 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:36.620286 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 00:39:36.645918 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:39:36.671640 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 00:39:36.696376 1405068 provision.go:87] duration metric: took 581.368275ms to configureAuth
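The server certificate generated above carries the SANs listed in the san=[...] field. A hedged way to confirm them on disk (openssl output formatting can differ slightly between versions):

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
	# expected, per the log: addons-177998, localhost, minikube, 127.0.0.1, 192.168.49.2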
	I0815 00:39:36.696403 1405068 ubuntu.go:193] setting minikube options for container-runtime
	I0815 00:39:36.696597 1405068 config.go:182] Loaded profile config "addons-177998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:39:36.696719 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:36.713720 1405068 main.go:141] libmachine: Using SSH client type: native
	I0815 00:39:36.713978 1405068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34600 <nil> <nil>}
	I0815 00:39:36.713994 1405068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:39:36.948596 1405068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:39:36.948618 1405068 machine.go:97] duration metric: took 4.325592304s to provisionDockerMachine
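Provisioning ends by dropping CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarting CRI-O. A quick, illustrative check on the node (this assumes the crio unit sources that sysconfig file, which is how the kicbase image is normally wired up):

	cat /etc/sysconfig/crio.minikube      # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio         # "active" once the restart above completes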
	I0815 00:39:36.948628 1405068 client.go:171] duration metric: took 13.33533055s to LocalClient.Create
	I0815 00:39:36.948642 1405068 start.go:167] duration metric: took 13.335384565s to libmachine.API.Create "addons-177998"
	I0815 00:39:36.948649 1405068 start.go:293] postStartSetup for "addons-177998" (driver="docker")
	I0815 00:39:36.948665 1405068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:39:36.948748 1405068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:39:36.948795 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:36.966990 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:37.072349 1405068 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:39:37.075806 1405068 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 00:39:37.075841 1405068 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 00:39:37.075852 1405068 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 00:39:37.075859 1405068 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 00:39:37.075870 1405068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/addons for local assets ...
	I0815 00:39:37.075942 1405068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/files for local assets ...
	I0815 00:39:37.075964 1405068 start.go:296] duration metric: took 127.308685ms for postStartSetup
	I0815 00:39:37.076288 1405068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177998
	I0815 00:39:37.092489 1405068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/config.json ...
	I0815 00:39:37.092798 1405068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:39:37.092842 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:37.109522 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:37.203168 1405068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 00:39:37.207415 1405068 start.go:128] duration metric: took 13.596836018s to createHost
	I0815 00:39:37.207444 1405068 start.go:83] releasing machines lock for "addons-177998", held for 13.597006297s
	I0815 00:39:37.207517 1405068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-177998
	I0815 00:39:37.228753 1405068 ssh_runner.go:195] Run: cat /version.json
	I0815 00:39:37.228817 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:37.229057 1405068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:39:37.229117 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:39:37.253491 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:37.254482 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:39:37.345931 1405068 ssh_runner.go:195] Run: systemctl --version
	I0815 00:39:37.476469 1405068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:39:37.619185 1405068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 00:39:37.623434 1405068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:39:37.643128 1405068 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 00:39:37.643207 1405068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:39:37.675635 1405068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0815 00:39:37.675711 1405068 start.go:495] detecting cgroup driver to use...
	I0815 00:39:37.675760 1405068 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 00:39:37.675844 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:39:37.691697 1405068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:39:37.703404 1405068 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:39:37.703488 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:39:37.717725 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:39:37.732729 1405068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:39:37.822468 1405068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:39:37.927047 1405068 docker.go:233] disabling docker service ...
	I0815 00:39:37.927141 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:39:37.947551 1405068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:39:37.963430 1405068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:39:38.064078 1405068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:39:38.165031 1405068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:39:38.176454 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:39:38.194244 1405068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:39:38.194356 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.204489 1405068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:39:38.204575 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.214847 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.225006 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.235204 1405068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:39:38.244680 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.254960 1405068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.271170 1405068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:39:38.281068 1405068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:39:38.289951 1405068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:39:38.298253 1405068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:39:38.383214 1405068 ssh_runner.go:195] Run: sudo systemctl restart crio
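Taken together, the sed edits above point CRI-O at the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports. A minimal spot-check of the rewritten file (keys only; the surrounding TOML sections are omitted here):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",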
	I0815 00:39:38.501507 1405068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:39:38.501687 1405068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:39:38.505695 1405068 start.go:563] Will wait 60s for crictl version
	I0815 00:39:38.505778 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:39:38.509206 1405068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:39:38.547942 1405068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 00:39:38.548053 1405068 ssh_runner.go:195] Run: crio --version
	I0815 00:39:38.588730 1405068 ssh_runner.go:195] Run: crio --version
	I0815 00:39:38.634077 1405068 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 00:39:38.636156 1405068 cli_runner.go:164] Run: docker network inspect addons-177998 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 00:39:38.650117 1405068 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 00:39:38.653667 1405068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:39:38.664210 1405068 kubeadm.go:883] updating cluster {Name:addons-177998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:39:38.664337 1405068 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:39:38.664398 1405068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:39:38.746727 1405068 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:39:38.746753 1405068 crio.go:433] Images already preloaded, skipping extraction
	I0815 00:39:38.746810 1405068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:39:38.783967 1405068 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:39:38.783990 1405068 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:39:38.783999 1405068 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0815 00:39:38.784099 1405068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-177998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
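The kubelet unit flags above are written to the node a few steps further down as a systemd drop-in (the 363-byte 10-kubeadm.conf and 352-byte kubelet.service scp'd below). An illustrative check once they land:

	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl status kubelet --no-pager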
	I0815 00:39:38.784182 1405068 ssh_runner.go:195] Run: crio config
	I0815 00:39:38.834517 1405068 cni.go:84] Creating CNI manager for ""
	I0815 00:39:38.834541 1405068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:39:38.834555 1405068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:39:38.834578 1405068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-177998 NodeName:addons-177998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:39:38.834729 1405068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-177998"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 00:39:38.834801 1405068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:39:38.843622 1405068 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:39:38.843688 1405068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:39:38.852416 1405068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 00:39:38.870695 1405068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:39:38.888484 1405068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
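The rendered config (2151 bytes, staged as kubeadm.yaml.new above) still uses the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm warns about during init further down. A hedged sketch of linting and migrating it ahead of time with the bundled kubeadm binary:

	KUBEADM=/var/lib/minikube/binaries/v1.31.0/kubeadm
	sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new
	sudo "$KUBEADM" config migrate --old-config /var/tmp/minikube/kubeadm.yaml.new --new-config /tmp/kubeadm-migrated.yaml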
	I0815 00:39:38.905935 1405068 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0815 00:39:38.909139 1405068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:39:38.919754 1405068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:39:39.010933 1405068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:39:39.026856 1405068 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998 for IP: 192.168.49.2
	I0815 00:39:39.026882 1405068 certs.go:194] generating shared ca certs ...
	I0815 00:39:39.026903 1405068 certs.go:226] acquiring lock for ca certs: {Name:mk7828e60149aaf109ce40cae2b300a118fa9ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:39.027089 1405068 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key
	I0815 00:39:39.673208 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt ...
	I0815 00:39:39.673239 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt: {Name:mk659c4665d9208d9ef76dc441880ade749b2196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:39.673916 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key ...
	I0815 00:39:39.673942 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key: {Name:mk7c20db56fb05eddb03e1dd8e898401e59f742b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:39.674521 1405068 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key
	I0815 00:39:41.073877 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt ...
	I0815 00:39:41.073914 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt: {Name:mk724f5e477d1fd6ee1f18d46c189d4c01d6ab13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:41.074117 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key ...
	I0815 00:39:41.074130 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key: {Name:mka2c5a0dd947cf79d708cfa9e77fea7155b512c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:41.074648 1405068 certs.go:256] generating profile certs ...
	I0815 00:39:41.074721 1405068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.key
	I0815 00:39:41.074741 1405068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt with IP's: []
	I0815 00:39:41.752228 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt ...
	I0815 00:39:41.752262 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: {Name:mkd09339e85b8a905e0a8958e21d9c814968d75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:41.752463 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.key ...
	I0815 00:39:41.752477 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.key: {Name:mk55bc47ab12616c4255067ff759edcf2329fb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:41.753105 1405068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key.947b2780
	I0815 00:39:41.753133 1405068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt.947b2780 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0815 00:39:42.725554 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt.947b2780 ...
	I0815 00:39:42.725632 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt.947b2780: {Name:mk1436a0e1ea41f2fbf1ac4f9e43de05848aff73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:42.725867 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key.947b2780 ...
	I0815 00:39:42.725916 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key.947b2780: {Name:mkf95c7cc21a34bed24d68ecd28119b08a923a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:42.726681 1405068 certs.go:381] copying /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt.947b2780 -> /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt
	I0815 00:39:42.726851 1405068 certs.go:385] copying /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key.947b2780 -> /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key
	I0815 00:39:42.726976 1405068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.key
	I0815 00:39:42.727020 1405068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.crt with IP's: []
	I0815 00:39:43.649164 1405068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.crt ...
	I0815 00:39:43.649200 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.crt: {Name:mk36d3fb042007f51c19030fb1645bce43faeb2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:43.649398 1405068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.key ...
	I0815 00:39:43.649415 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.key: {Name:mk79bdfa1392256a1100760467793cf2016af1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:39:43.649602 1405068 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:39:43.649649 1405068 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem (1082 bytes)
	I0815 00:39:43.649680 1405068 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:39:43.649708 1405068 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem (1679 bytes)
	I0815 00:39:43.650307 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:39:43.675240 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:39:43.699115 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:39:43.723872 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 00:39:43.748538 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 00:39:43.773397 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:39:43.798046 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:39:43.823599 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 00:39:43.848242 1405068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:39:43.872172 1405068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:39:43.890080 1405068 ssh_runner.go:195] Run: openssl version
	I0815 00:39:43.895603 1405068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:39:43.905057 1405068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:39:43.908592 1405068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:39:43.908671 1405068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:39:43.915663 1405068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
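The hash-and-symlink step above registers minikubeCA.pem with the system trust store under its OpenSSL subject hash. Roughly what that amounts to (the hash value b5213941 comes straight from the symlink name in the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"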
	I0815 00:39:43.925215 1405068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:39:43.928491 1405068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:39:43.928543 1405068 kubeadm.go:392] StartCluster: {Name:addons-177998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-177998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:39:43.928627 1405068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:39:43.928687 1405068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:39:43.965177 1405068 cri.go:89] found id: ""
	I0815 00:39:43.965286 1405068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:39:43.974109 1405068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:39:43.983003 1405068 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 00:39:43.983072 1405068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:39:43.992064 1405068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:39:43.992098 1405068 kubeadm.go:157] found existing configuration files:
	
	I0815 00:39:43.992189 1405068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:39:44.001596 1405068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:39:44.001770 1405068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:39:44.017829 1405068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:39:44.027370 1405068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:39:44.027479 1405068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:39:44.036435 1405068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:39:44.045908 1405068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:39:44.045983 1405068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:39:44.055213 1405068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:39:44.064441 1405068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:39:44.064531 1405068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:39:44.073352 1405068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 00:39:44.114035 1405068 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:39:44.114117 1405068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:39:44.133839 1405068 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 00:39:44.133916 1405068 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0815 00:39:44.133957 1405068 kubeadm.go:310] OS: Linux
	I0815 00:39:44.134004 1405068 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 00:39:44.134055 1405068 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 00:39:44.134104 1405068 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 00:39:44.134154 1405068 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 00:39:44.134203 1405068 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 00:39:44.134258 1405068 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 00:39:44.134305 1405068 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 00:39:44.134355 1405068 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 00:39:44.134415 1405068 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 00:39:44.201757 1405068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:39:44.201878 1405068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:39:44.201974 1405068 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:39:44.210830 1405068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:39:44.215296 1405068 out.go:204]   - Generating certificates and keys ...
	I0815 00:39:44.215397 1405068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:39:44.215470 1405068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:39:44.480998 1405068 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:39:45.003152 1405068 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:39:45.539961 1405068 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:39:45.904140 1405068 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:39:46.259344 1405068 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:39:46.259676 1405068 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-177998 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:39:47.173704 1405068 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:39:47.173850 1405068 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-177998 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 00:39:47.933105 1405068 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:39:48.414713 1405068 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:39:49.406063 1405068 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:39:49.406294 1405068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:39:49.785304 1405068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:39:50.298827 1405068 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:39:50.785106 1405068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:39:51.037279 1405068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:39:51.650179 1405068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:39:51.650966 1405068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:39:51.654070 1405068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:39:51.656405 1405068 out.go:204]   - Booting up control plane ...
	I0815 00:39:51.656510 1405068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:39:51.656591 1405068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:39:51.657344 1405068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:39:51.672944 1405068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:39:51.679100 1405068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:39:51.679380 1405068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:39:51.779425 1405068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:39:51.779978 1405068 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:39:53.282180 1405068 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502227525s
	I0815 00:39:53.282278 1405068 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:39:59.783576 1405068 kubeadm.go:310] [api-check] The API server is healthy after 6.501405475s
	I0815 00:39:59.804259 1405068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:39:59.819242 1405068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:39:59.843823 1405068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:39:59.844014 1405068 kubeadm.go:310] [mark-control-plane] Marking the node addons-177998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:39:59.854635 1405068 kubeadm.go:310] [bootstrap-token] Using token: py4gee.yjt56wgqozwbhy5y
	I0815 00:39:59.857957 1405068 out.go:204]   - Configuring RBAC rules ...
	I0815 00:39:59.858088 1405068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:39:59.862181 1405068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:39:59.870590 1405068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:39:59.875596 1405068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:39:59.879516 1405068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:39:59.883337 1405068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:40:00.207046 1405068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:40:00.818575 1405068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:40:01.192091 1405068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:40:01.194626 1405068 kubeadm.go:310] 
	I0815 00:40:01.194715 1405068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:40:01.194726 1405068 kubeadm.go:310] 
	I0815 00:40:01.194804 1405068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:40:01.194812 1405068 kubeadm.go:310] 
	I0815 00:40:01.194837 1405068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:40:01.194899 1405068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:40:01.194950 1405068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:40:01.194961 1405068 kubeadm.go:310] 
	I0815 00:40:01.195013 1405068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:40:01.195023 1405068 kubeadm.go:310] 
	I0815 00:40:01.195069 1405068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:40:01.195077 1405068 kubeadm.go:310] 
	I0815 00:40:01.195127 1405068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:40:01.195202 1405068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:40:01.195271 1405068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:40:01.195279 1405068 kubeadm.go:310] 
	I0815 00:40:01.195359 1405068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:40:01.195438 1405068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:40:01.195445 1405068 kubeadm.go:310] 
	I0815 00:40:01.195525 1405068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token py4gee.yjt56wgqozwbhy5y \
	I0815 00:40:01.195628 1405068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e6084f0db819136e4eac5633399139c1200997e817605c079edabc35a775495a \
	I0815 00:40:01.195651 1405068 kubeadm.go:310] 	--control-plane 
	I0815 00:40:01.195658 1405068 kubeadm.go:310] 
	I0815 00:40:01.195740 1405068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:40:01.195748 1405068 kubeadm.go:310] 
	I0815 00:40:01.195827 1405068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token py4gee.yjt56wgqozwbhy5y \
	I0815 00:40:01.195928 1405068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e6084f0db819136e4eac5633399139c1200997e817605c079edabc35a775495a 
	I0815 00:40:01.201146 1405068 kubeadm.go:310] W0815 00:39:44.110804    1203 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:40:01.201442 1405068 kubeadm.go:310] W0815 00:39:44.111701    1203 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:40:01.201656 1405068 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0815 00:40:01.201769 1405068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 00:40:01.201791 1405068 cni.go:84] Creating CNI manager for ""
	I0815 00:40:01.201805 1405068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:40:01.203954 1405068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 00:40:01.205830 1405068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 00:40:01.210744 1405068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 00:40:01.210766 1405068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 00:40:01.233214 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
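After the CNI manifest is applied, kindnet (the CNI the log recommends for the docker driver with crio) should come up in kube-system. An illustrative follow-up check; the log itself only shows the apply:

	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonsets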
	I0815 00:40:01.528947 1405068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:40:01.529098 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:01.529101 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-177998 minikube.k8s.io/updated_at=2024_08_15T00_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=addons-177998 minikube.k8s.io/primary=true
	I0815 00:40:01.545264 1405068 ops.go:34] apiserver oom_adj: -16
	I0815 00:40:01.644762 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:02.145805 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:02.644876 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:03.144896 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:03.645370 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:04.144807 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:04.645732 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:05.145444 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:05.644836 1405068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:40:05.738045 1405068 kubeadm.go:1113] duration metric: took 4.209008953s to wait for elevateKubeSystemPrivileges
	I0815 00:40:05.738078 1405068 kubeadm.go:394] duration metric: took 21.809539468s to StartCluster
	I0815 00:40:05.738100 1405068 settings.go:142] acquiring lock: {Name:mk702991e0e1159812b2000a3112e7b24af8d662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:40:05.739032 1405068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 00:40:05.739430 1405068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/kubeconfig: {Name:mkbc924cd270a9bf83bc63fe6d76f87df76fc38b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:40:05.739637 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:40:05.739658 1405068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:40:05.739931 1405068 config.go:182] Loaded profile config "addons-177998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:40:05.739961 1405068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0815 00:40:05.740054 1405068 addons.go:69] Setting yakd=true in profile "addons-177998"
	I0815 00:40:05.740077 1405068 addons.go:234] Setting addon yakd=true in "addons-177998"
	I0815 00:40:05.740104 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.740545 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.741012 1405068 addons.go:69] Setting cloud-spanner=true in profile "addons-177998"
	I0815 00:40:05.741048 1405068 addons.go:234] Setting addon cloud-spanner=true in "addons-177998"
	I0815 00:40:05.741067 1405068 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-177998"
	I0815 00:40:05.741103 1405068 addons.go:69] Setting storage-provisioner=true in profile "addons-177998"
	I0815 00:40:05.741124 1405068 addons.go:234] Setting addon storage-provisioner=true in "addons-177998"
	I0815 00:40:05.741143 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.741163 1405068 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-177998"
	I0815 00:40:05.741217 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.741591 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.741723 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.743237 1405068 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-177998"
	I0815 00:40:05.743306 1405068 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-177998"
	I0815 00:40:05.743336 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.743788 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.744292 1405068 addons.go:69] Setting default-storageclass=true in profile "addons-177998"
	I0815 00:40:05.744331 1405068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-177998"
	I0815 00:40:05.744600 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.754730 1405068 addons.go:69] Setting gcp-auth=true in profile "addons-177998"
	I0815 00:40:05.754798 1405068 mustload.go:65] Loading cluster: addons-177998
	I0815 00:40:05.755022 1405068 config.go:182] Loaded profile config "addons-177998": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:40:05.755440 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.762167 1405068 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-177998"
	I0815 00:40:05.762320 1405068 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-177998"
	I0815 00:40:05.763075 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.767612 1405068 addons.go:69] Setting ingress=true in profile "addons-177998"
	I0815 00:40:05.767659 1405068 addons.go:234] Setting addon ingress=true in "addons-177998"
	I0815 00:40:05.767703 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.768150 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.779925 1405068 addons.go:69] Setting volcano=true in profile "addons-177998"
	I0815 00:40:05.779973 1405068 addons.go:234] Setting addon volcano=true in "addons-177998"
	I0815 00:40:05.780009 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.780463 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.798768 1405068 addons.go:69] Setting ingress-dns=true in profile "addons-177998"
	I0815 00:40:05.798861 1405068 addons.go:234] Setting addon ingress-dns=true in "addons-177998"
	I0815 00:40:05.798919 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.799391 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.812397 1405068 addons.go:69] Setting inspektor-gadget=true in profile "addons-177998"
	I0815 00:40:05.812523 1405068 addons.go:69] Setting volumesnapshots=true in profile "addons-177998"
	I0815 00:40:05.812589 1405068 addons.go:234] Setting addon volumesnapshots=true in "addons-177998"
	I0815 00:40:05.812672 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.812718 1405068 addons.go:234] Setting addon inspektor-gadget=true in "addons-177998"
	I0815 00:40:05.812849 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.813251 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.822810 1405068 out.go:177] * Verifying Kubernetes components...
	I0815 00:40:05.825525 1405068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:40:05.837404 1405068 addons.go:69] Setting metrics-server=true in profile "addons-177998"
	I0815 00:40:05.837505 1405068 addons.go:234] Setting addon metrics-server=true in "addons-177998"
	I0815 00:40:05.837579 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.838205 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.841808 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.741079 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.848147 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.741096 1405068 addons.go:69] Setting registry=true in profile "addons-177998"
	I0815 00:40:05.849638 1405068 addons.go:234] Setting addon registry=true in "addons-177998"
	I0815 00:40:05.849692 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.850197 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.862711 1405068 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 00:40:05.862940 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.867350 1405068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:40:05.869369 1405068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:40:05.869387 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:40:05.869447 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:05.864873 1405068 addons.go:234] Setting addon default-storageclass=true in "addons-177998"
	I0815 00:40:05.870607 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.871043 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.897478 1405068 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:40:05.897502 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 00:40:05.897570 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:05.899011 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0815 00:40:05.900463 1405068 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0815 00:40:05.919468 1405068 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 00:40:05.922193 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 00:40:05.924099 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 00:40:05.925950 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 00:40:05.928114 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 00:40:05.928161 1405068 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 00:40:05.928240 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:05.929012 1405068 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-177998"
	I0815 00:40:05.929046 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:05.929445 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:05.928130 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 00:40:05.954529 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 00:40:05.959494 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 00:40:05.960705 1405068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:40:05.970996 1405068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:40:05.989836 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 00:40:06.000891 1405068 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 00:40:06.004583 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 00:40:06.004666 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 00:40:06.004786 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.005019 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 00:40:06.005060 1405068 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 00:40:06.005145 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.036154 1405068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 00:40:06.036323 1405068 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 00:40:06.042017 1405068 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 00:40:06.042088 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 00:40:06.042196 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.042587 1405068 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:40:06.042630 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 00:40:06.042713 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.100589 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.100613 1405068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:40:06.100629 1405068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:40:06.100700 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.106709 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.108176 1405068 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 00:40:06.108619 1405068 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 00:40:06.109970 1405068 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 00:40:06.110923 1405068 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 00:40:06.113080 1405068 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:40:06.113100 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 00:40:06.113172 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.117677 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 00:40:06.117708 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 00:40:06.117786 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.120681 1405068 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 00:40:06.120709 1405068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 00:40:06.120779 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.150522 1405068 out.go:177]   - Using image docker.io/busybox:stable
	I0815 00:40:06.150876 1405068 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 00:40:06.151451 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.160032 1405068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:40:06.160054 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 00:40:06.160120 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.162110 1405068 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 00:40:06.166623 1405068 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 00:40:06.166661 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 00:40:06.166745 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:06.205416 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.206332 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.224063 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.226433 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.231789 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.243709 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.246369 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.290507 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.292829 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.306610 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:06.462219 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:40:06.467044 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 00:40:06.467144 1405068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:40:06.493291 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 00:40:06.493316 1405068 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 00:40:06.551137 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:40:06.609221 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:40:06.680236 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:40:06.682509 1405068 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 00:40:06.682533 1405068 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 00:40:06.686177 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 00:40:06.700144 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 00:40:06.700170 1405068 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 00:40:06.703583 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 00:40:06.703604 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 00:40:06.709199 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 00:40:06.709224 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 00:40:06.727470 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:40:06.745916 1405068 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 00:40:06.745949 1405068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 00:40:06.747746 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:40:06.757669 1405068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 00:40:06.757697 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 00:40:06.873990 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 00:40:06.874016 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 00:40:06.890242 1405068 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:40:06.890266 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 00:40:06.904622 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 00:40:06.904648 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 00:40:06.908034 1405068 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 00:40:06.908063 1405068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 00:40:06.933834 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 00:40:06.933860 1405068 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 00:40:07.005712 1405068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 00:40:07.005743 1405068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 00:40:07.047039 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 00:40:07.047071 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 00:40:07.074256 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 00:40:07.074281 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 00:40:07.096973 1405068 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 00:40:07.096999 1405068 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 00:40:07.186125 1405068 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:40:07.186148 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 00:40:07.203350 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 00:40:07.203388 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 00:40:07.206558 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:40:07.208132 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 00:40:07.208154 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 00:40:07.225691 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 00:40:07.225717 1405068 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 00:40:07.265874 1405068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:40:07.265905 1405068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 00:40:07.353238 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:40:07.360318 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 00:40:07.360348 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 00:40:07.363865 1405068 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 00:40:07.363891 1405068 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 00:40:07.407212 1405068 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:40:07.407233 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 00:40:07.474144 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:40:07.499546 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 00:40:07.499572 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 00:40:07.510485 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 00:40:07.510516 1405068 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 00:40:07.614174 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:40:07.618934 1405068 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:40:07.618959 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 00:40:07.664656 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 00:40:07.664690 1405068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 00:40:07.730817 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:40:07.797601 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 00:40:07.797627 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 00:40:07.892037 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 00:40:07.892063 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 00:40:07.923596 1405068 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:40:07.923621 1405068 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 00:40:07.985836 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:40:11.372273 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.910013728s)
	I0815 00:40:11.372399 1405068 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.905236443s)
	I0815 00:40:11.372600 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.821309372s)
	I0815 00:40:11.372415 1405068 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.90534493s)
	I0815 00:40:11.372705 1405068 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0815 00:40:11.374348 1405068 node_ready.go:35] waiting up to 6m0s for node "addons-177998" to be "Ready" ...
	I0815 00:40:12.002430 1405068 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-177998" context rescaled to 1 replicas
	I0815 00:40:13.033882 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.353620146s)
	I0815 00:40:13.034024 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.424526875s)
	I0815 00:40:13.034061 1405068 addons.go:475] Verifying addon ingress=true in "addons-177998"
	I0815 00:40:13.034455 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.348251144s)
	I0815 00:40:13.034529 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.307035746s)
	I0815 00:40:13.034561 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.286796308s)
	I0815 00:40:13.034585 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.828004413s)
	I0815 00:40:13.035025 1405068 addons.go:475] Verifying addon registry=true in "addons-177998"
	I0815 00:40:13.034621 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.681356176s)
	I0815 00:40:13.034671 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.560496631s)
	I0815 00:40:13.036027 1405068 addons.go:475] Verifying addon metrics-server=true in "addons-177998"
	I0815 00:40:13.036361 1405068 out.go:177] * Verifying ingress addon...
	I0815 00:40:13.037837 1405068 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-177998 service yakd-dashboard -n yakd-dashboard
	
	I0815 00:40:13.037890 1405068 out.go:177] * Verifying registry addon...
	I0815 00:40:13.038886 1405068 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 00:40:13.041323 1405068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 00:40:13.081035 1405068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:40:13.081198 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:13.082596 1405068 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 00:40:13.082623 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0815 00:40:13.150967 1405068 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0815 00:40:13.274853 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.6606272s)
	W0815 00:40:13.274903 1405068 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:40:13.274948 1405068 retry.go:31] will retry after 233.192568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:40:13.275031 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.544176958s)
	I0815 00:40:13.381672 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:13.508435 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:40:13.552388 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:13.558992 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.573107025s)
	I0815 00:40:13.559091 1405068 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-177998"
	I0815 00:40:13.562316 1405068 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 00:40:13.565689 1405068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 00:40:13.572079 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:13.661179 1405068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:40:13.661263 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:14.045418 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:14.046748 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:14.069428 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:14.544063 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:14.545746 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:14.569743 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:15.047789 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:15.049775 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:15.070343 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:15.552346 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:15.553129 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:15.569913 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:15.878757 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:16.051259 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:16.052960 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:16.072225 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:16.339970 1405068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 00:40:16.340085 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:16.378599 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:16.448178 1405068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.939650521s)
	I0815 00:40:16.547195 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:16.549116 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:16.573298 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:16.583180 1405068 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 00:40:16.601978 1405068 addons.go:234] Setting addon gcp-auth=true in "addons-177998"
	I0815 00:40:16.602079 1405068 host.go:66] Checking if "addons-177998" exists ...
	I0815 00:40:16.602628 1405068 cli_runner.go:164] Run: docker container inspect addons-177998 --format={{.State.Status}}
	I0815 00:40:16.624748 1405068 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 00:40:16.624800 1405068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-177998
	I0815 00:40:16.644541 1405068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34600 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/addons-177998/id_rsa Username:docker}
	I0815 00:40:16.748889 1405068 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:40:16.751205 1405068 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 00:40:16.752782 1405068 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 00:40:16.752802 1405068 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 00:40:16.782425 1405068 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 00:40:16.782454 1405068 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 00:40:16.808091 1405068 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:40:16.808123 1405068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 00:40:16.833219 1405068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:40:17.049220 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:17.051538 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:17.071604 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:17.572142 1405068 addons.go:475] Verifying addon gcp-auth=true in "addons-177998"
	I0815 00:40:17.574681 1405068 out.go:177] * Verifying gcp-auth addon...
	I0815 00:40:17.577473 1405068 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 00:40:17.588309 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:17.592067 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:17.598824 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:17.688359 1405068 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 00:40:17.688385 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:18.052798 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:18.053500 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:18.071466 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:18.082695 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:18.387551 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:18.545072 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:18.547144 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:18.569650 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:18.581221 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:19.044712 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:19.047313 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:19.070519 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:19.081081 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:19.543613 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:19.545132 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:19.570215 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:19.582926 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:20.045732 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:20.046902 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:20.070478 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:20.083496 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:20.548972 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:20.550352 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:20.570426 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:20.581500 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:20.878907 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:21.044834 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:21.051405 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:21.073649 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:21.081516 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:21.542864 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:21.545503 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:21.569610 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:21.581197 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:22.043694 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:22.045777 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:22.070804 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:22.081421 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:22.544359 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:22.544950 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:22.569297 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:22.581303 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:23.043436 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:23.046336 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:23.069652 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:23.081428 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:23.379841 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:23.544047 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:23.545361 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:23.569388 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:23.580821 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:24.043183 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:24.046309 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:24.069901 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:24.081465 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:24.543494 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:24.546060 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:24.568941 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:24.582493 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:25.043269 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:25.045681 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:25.072050 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:25.080831 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:25.543293 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:25.546020 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:25.569023 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:25.581140 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:25.878064 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:26.045549 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:26.047565 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:26.069509 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:26.080801 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:26.544198 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:26.545070 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:26.569507 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:26.580544 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:27.044718 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:27.047451 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:27.069865 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:27.081244 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:27.545409 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:27.546501 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:27.569706 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:27.581338 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:27.878142 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:28.043909 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:28.045729 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:28.069867 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:28.081016 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:28.543108 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:28.544692 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:28.569258 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:28.580838 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:29.044895 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:29.047817 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:29.069577 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:29.081008 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:29.543836 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:29.545675 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:29.568991 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:29.581239 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:29.878281 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:30.045770 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:30.053561 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:30.075251 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:30.086241 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:30.542828 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:30.544621 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:30.570082 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:30.580561 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:31.043943 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:31.048045 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:31.071410 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:31.081038 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:31.543657 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:31.545011 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:31.569995 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:31.581445 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:31.878580 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:32.046192 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:32.046525 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:32.069891 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:32.080935 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:32.543375 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:32.546426 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:32.570152 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:32.581416 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:33.043218 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:33.046675 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:33.070278 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:33.081100 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:33.543387 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:33.545205 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:33.569233 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:33.581338 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:34.044526 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:34.045807 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:34.069878 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:34.081252 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:34.377559 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:34.543959 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:34.545570 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:34.569538 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:34.581085 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:35.043859 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:35.046427 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:35.069976 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:35.083020 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:35.543627 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:35.544946 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:35.571071 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:35.580635 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:36.045509 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:36.050566 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:36.070686 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:36.083491 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:36.378888 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:36.547052 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:36.550214 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:36.569914 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:36.581113 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:37.043670 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:37.060186 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:37.081817 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:37.088688 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:37.545248 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:37.546210 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:37.569573 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:37.580477 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:38.043554 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:38.045295 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:38.069923 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:38.082228 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:38.544211 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:38.547123 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:38.569422 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:38.580464 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:38.877850 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:39.044637 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:39.046997 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:39.069516 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:39.081441 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:39.544144 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:39.545668 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:39.569322 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:39.581434 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:40.046113 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:40.048072 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:40.069718 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:40.081093 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:40.543578 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:40.545238 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:40.569597 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:40.581175 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:40.878132 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:41.043945 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:41.046288 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:41.069569 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:41.081841 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:41.542667 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:41.545418 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:41.569591 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:41.581334 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:42.044204 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:42.046635 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:42.069456 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:42.082648 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:42.543343 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:42.546144 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:42.569203 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:42.581554 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:43.043860 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:43.045704 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:43.069239 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:43.080648 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:43.378046 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:43.543057 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:43.545571 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:43.569592 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:43.581110 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:44.043241 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:44.045935 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:44.069899 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:44.080977 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:44.543295 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:44.545448 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:44.569521 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:44.580764 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:45.054935 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:45.071278 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:45.090111 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:45.131394 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:45.383017 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:45.543173 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:45.544868 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:45.569744 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:45.581239 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:46.044056 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:46.045662 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:46.069127 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:46.080642 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:46.544175 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:46.545649 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:46.569785 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:46.580842 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:47.043434 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:47.046933 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:47.070084 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:47.080803 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:47.544502 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:47.546142 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:47.569377 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:47.581160 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:47.877977 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:48.044349 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:48.046541 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:48.069742 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:48.081689 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:48.543512 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:48.545005 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:48.569290 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:48.580816 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:49.043433 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:49.045535 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:49.070293 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:49.080998 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:49.543511 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:49.546720 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:49.569654 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:49.581330 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:49.878621 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:50.046504 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:50.047179 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:50.069531 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:50.084077 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:50.545774 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:50.546801 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:50.570061 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:50.580587 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:51.054186 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:51.055558 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:51.070015 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:51.081853 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:51.542910 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:51.544481 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:51.569830 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:51.580919 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:52.044111 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:52.044984 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:52.068999 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:52.081324 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:52.378269 1405068 node_ready.go:53] node "addons-177998" has status "Ready":"False"
	I0815 00:40:52.543519 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:52.545708 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:52.568932 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:52.581233 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:53.046647 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:53.047575 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:53.069833 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:53.080955 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:53.555318 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:53.556974 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:53.618618 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:53.621001 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:53.910808 1405068 node_ready.go:49] node "addons-177998" has status "Ready":"True"
	I0815 00:40:53.910885 1405068 node_ready.go:38] duration metric: took 42.536361271s for node "addons-177998" to be "Ready" ...
	I0815 00:40:53.910910 1405068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:40:53.960183 1405068 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-pdg4h" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:54.055985 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:54.061760 1405068 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:40:54.061833 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:54.086226 1405068 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:40:54.086309 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:54.098231 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:54.572156 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:54.573210 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:54.600568 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:54.600986 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:55.065030 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:55.066624 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:55.081546 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:55.083986 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:55.545514 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:55.546069 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:55.570779 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:55.581472 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:55.967821 1405068 pod_ready.go:92] pod "coredns-6f6b679f8f-pdg4h" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.967893 1405068 pod_ready.go:81] duration metric: took 2.007632437s for pod "coredns-6f6b679f8f-pdg4h" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.967927 1405068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.974166 1405068 pod_ready.go:92] pod "etcd-addons-177998" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.974189 1405068 pod_ready.go:81] duration metric: took 6.254821ms for pod "etcd-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.974206 1405068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.979850 1405068 pod_ready.go:92] pod "kube-apiserver-addons-177998" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.979948 1405068 pod_ready.go:81] duration metric: took 5.732979ms for pod "kube-apiserver-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.979997 1405068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.986844 1405068 pod_ready.go:92] pod "kube-controller-manager-addons-177998" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.986876 1405068 pod_ready.go:81] duration metric: took 6.862177ms for pod "kube-controller-manager-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.986897 1405068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wktb" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.992881 1405068 pod_ready.go:92] pod "kube-proxy-5wktb" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:55.992912 1405068 pod_ready.go:81] duration metric: took 6.004075ms for pod "kube-proxy-5wktb" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:55.992924 1405068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:56.043890 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:56.046892 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:56.071073 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:56.082447 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:56.364413 1405068 pod_ready.go:92] pod "kube-scheduler-addons-177998" in "kube-system" namespace has status "Ready":"True"
	I0815 00:40:56.364443 1405068 pod_ready.go:81] duration metric: took 371.510456ms for pod "kube-scheduler-addons-177998" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:56.364455 1405068 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace to be "Ready" ...
	I0815 00:40:56.545857 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:56.545993 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:56.570094 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:56.580978 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:57.048530 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:57.050161 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:57.071675 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:57.081778 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:57.544860 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:57.547070 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:57.571456 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:57.581162 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:58.044840 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:58.051478 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:58.071099 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:58.081233 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:58.372217 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:40:58.544587 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:58.548136 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:58.572376 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:58.581046 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:59.043624 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:59.047132 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:59.070520 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:59.080831 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:40:59.544241 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:40:59.547164 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:40:59.571007 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:40:59.581046 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:00.129793 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:00.130775 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:00.132247 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:00.179560 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:00.389903 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:00.550370 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:00.552050 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:00.572716 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:00.592726 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:01.045539 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:01.049681 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:01.071583 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:01.081507 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:01.545596 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:01.546709 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:01.570905 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:01.581314 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:02.045362 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:02.046892 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:02.070984 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:02.081233 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:02.543894 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:02.546851 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:02.570825 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:02.580752 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:02.872205 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:03.068236 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:03.074604 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:03.085648 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:03.088368 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:03.546110 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:03.548758 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:03.581641 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:03.590718 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:04.044897 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:04.045890 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:04.070651 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:04.080990 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:04.553014 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:04.553517 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:04.607367 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:04.607475 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:05.048816 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:05.049043 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:05.087368 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:05.094310 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:05.373032 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:05.554829 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:05.556963 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:05.572651 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:05.586494 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:06.044553 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:06.046812 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:06.071814 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:06.084242 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:06.546345 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:06.549138 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:06.645407 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:06.647536 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:07.052140 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:07.055056 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:07.072309 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:07.082831 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:07.550759 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:07.556443 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:07.574048 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:07.582532 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:07.874664 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:08.045530 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:08.064058 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:08.081583 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:08.095166 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:08.558235 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:08.564800 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:08.581983 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:08.605671 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:09.047368 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:09.056231 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:09.085065 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:09.104080 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:09.548267 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:09.549882 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:09.572315 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:09.583788 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:10.051590 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:10.053859 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:10.072390 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:10.086131 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:10.374489 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:10.546451 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:10.547854 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:10.572053 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:10.582039 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:11.055508 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:11.056530 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:11.071459 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:11.080896 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:11.545096 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:11.546317 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:11.571931 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:11.581631 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:12.045873 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:12.048647 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:12.072588 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:12.084872 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:12.545717 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:12.548896 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:12.575417 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:12.581507 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:12.872662 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:13.048430 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:13.051269 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:13.072255 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:13.082456 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:13.547792 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:13.549633 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:13.572078 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:13.582335 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:14.047103 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:14.047855 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:14.070817 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:14.080537 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:14.543801 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:14.547991 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:14.573258 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:14.582538 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:14.874271 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:15.071762 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:15.075272 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:15.078799 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:15.081997 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:15.547505 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:15.548445 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:15.574291 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:15.584246 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:16.056218 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:16.057799 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:16.157319 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:16.157434 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:16.549609 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:16.550969 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:16.644955 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:16.646861 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:17.047375 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:17.049563 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:17.146994 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:17.147779 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:17.370495 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:17.543493 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:17.546163 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:17.571319 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:17.581680 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:18.046474 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:18.048426 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:18.071192 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:18.081785 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:18.545421 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:18.547090 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:18.570842 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:18.587015 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:19.044515 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:19.048343 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:19.071393 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:19.080556 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:19.371211 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:19.543813 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:19.545047 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:19.571006 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:19.580524 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:20.045335 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:20.046597 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:20.070642 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:20.080981 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:20.543560 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:20.546734 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:20.570330 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:20.582884 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:21.047705 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:21.058073 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:21.071313 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:21.081537 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:21.378616 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:21.546083 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:21.546844 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:21.577534 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:21.583870 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:22.046253 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:22.048272 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:22.072438 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:22.082968 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:22.546643 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:22.547662 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:22.570591 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:22.581058 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:23.046500 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:23.049264 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:23.071292 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:23.081164 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:23.544433 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:23.547021 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:23.571769 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:23.580995 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:23.873207 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:24.050328 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:24.054004 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:24.070858 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:24.085226 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:24.544557 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:24.546975 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:24.571630 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:24.581409 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:25.047809 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:25.049328 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:25.071534 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:25.082063 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:25.545466 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:25.549181 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:25.571973 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:25.581885 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:26.046955 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:26.048545 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:26.081651 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:26.098023 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:26.373130 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:26.543918 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:26.550407 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:26.572203 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:26.581703 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:27.044577 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:27.048015 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:27.072693 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:27.082048 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:27.545872 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:27.548137 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:27.571003 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:27.580825 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:28.045729 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:28.047228 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:28.070734 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:28.080772 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:28.545657 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:28.545833 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:28.570294 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:28.585018 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:28.871574 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:29.047029 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:29.051320 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:29.071659 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:29.081455 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:29.545470 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:29.548029 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:29.571697 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:29.581346 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:30.138301 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:30.141147 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:30.143034 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:30.144753 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:30.544859 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:30.546485 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:30.571351 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:30.581791 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:30.891417 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:31.046838 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:31.047693 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:31.070266 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:31.081087 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:31.544594 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:31.548174 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:31.573073 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:31.581603 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:32.043917 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:32.046666 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:32.070988 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:32.081254 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:32.543761 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:32.545855 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:32.570223 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:32.581465 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:33.047807 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:33.048863 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:33.070593 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:33.081449 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:33.372169 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:33.544481 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:33.547794 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:33.571322 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:33.581629 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:34.044653 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:34.052029 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:34.071300 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:34.081865 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:34.544609 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:34.551649 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:34.571161 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:34.581159 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:35.044582 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:35.047363 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:35.071573 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:35.081515 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:35.372639 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:35.553260 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:35.555709 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:35.575133 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:35.603154 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:36.063634 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:36.066057 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:36.076923 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:36.088907 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:36.544378 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:36.547174 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:36.571310 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:36.581520 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:37.052852 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:37.146297 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:37.146886 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:37.147492 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:37.544776 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:37.556076 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:37.570813 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:37.581248 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:37.896114 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:38.048502 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:38.056100 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:41:38.073952 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:38.091218 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:38.553604 1405068 kapi.go:107] duration metric: took 1m25.512277354s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 00:41:38.554878 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:38.582709 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:38.584069 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:39.049990 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:39.072005 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:39.083372 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:39.545371 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:39.571560 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:39.581337 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:40.050425 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:40.073042 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:40.083996 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:40.372053 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:40.544067 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:40.571510 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:40.581248 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:41.044468 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:41.071933 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:41.081234 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:41.544559 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:41.570753 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:41.581551 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:42.054210 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:42.082779 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:42.107381 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:42.544328 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:42.571312 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:42.581312 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:42.871263 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:43.047750 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:43.072165 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:43.081883 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:43.545624 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:43.574064 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:43.581965 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:44.044330 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:44.074072 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:44.082444 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:44.546675 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:44.576639 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:44.581310 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:45.045212 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:45.071658 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:45.081801 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:45.378691 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:45.543669 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:45.571379 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:45.581020 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:46.043501 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:46.070655 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:46.081151 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:46.544372 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:46.571583 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:46.581372 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:47.047785 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:47.076651 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:47.087650 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:47.544019 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:47.574289 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:47.584440 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:47.871714 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:48.045120 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:48.071995 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:48.081798 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:48.544469 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:48.571801 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:48.581828 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:49.043960 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:49.145285 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:49.146030 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:49.548079 1405068 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:41:49.576482 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:49.583273 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:49.874059 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:50.047471 1405068 kapi.go:107] duration metric: took 1m37.008578884s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 00:41:50.077966 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:50.084418 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:50.572742 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:50.581962 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:51.079181 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:51.171044 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:51.571046 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:51.581620 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:41:52.071496 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:52.081980 1405068 kapi.go:107] duration metric: took 1m34.504502919s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 00:41:52.084211 1405068 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-177998 cluster.
	I0815 00:41:52.086281 1405068 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 00:41:52.088272 1405068 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 00:41:52.370279 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:52.570607 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:53.072080 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:53.575721 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:54.071621 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:54.371219 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:54.570124 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:55.071984 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:55.571010 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:56.070870 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:56.372648 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:56.571169 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:57.072562 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:57.570736 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:58.070927 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:58.571612 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:58.872397 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:41:59.071587 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:41:59.585776 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:00.085897 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:00.571922 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:01.072135 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:01.372871 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:01.572663 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:02.072025 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:02.570771 1405068 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:42:03.071823 1405068 kapi.go:107] duration metric: took 1m49.506131956s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 00:42:03.074203 1405068 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0815 00:42:03.076173 1405068 addons.go:510] duration metric: took 1m57.33619243s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0815 00:42:03.871053 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:06.370529 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:08.370631 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:10.370947 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:12.870963 1405068 pod_ready.go:102] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"False"
	I0815 00:42:14.370512 1405068 pod_ready.go:92] pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace has status "Ready":"True"
	I0815 00:42:14.370540 1405068 pod_ready.go:81] duration metric: took 1m18.006077671s for pod "metrics-server-8988944d9-rf2fb" in "kube-system" namespace to be "Ready" ...
	I0815 00:42:14.370552 1405068 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7b7wb" in "kube-system" namespace to be "Ready" ...
	I0815 00:42:14.375899 1405068 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-7b7wb" in "kube-system" namespace has status "Ready":"True"
	I0815 00:42:14.375929 1405068 pod_ready.go:81] duration metric: took 5.369306ms for pod "nvidia-device-plugin-daemonset-7b7wb" in "kube-system" namespace to be "Ready" ...
	I0815 00:42:14.375950 1405068 pod_ready.go:38] duration metric: took 1m20.464996136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:42:14.375967 1405068 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:42:14.375995 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:42:14.376058 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:42:14.429020 1405068 cri.go:89] found id: "dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:14.429045 1405068 cri.go:89] found id: ""
	I0815 00:42:14.429053 1405068 logs.go:276] 1 containers: [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da]
	I0815 00:42:14.429121 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.433118 1405068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:42:14.433195 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:42:14.475366 1405068 cri.go:89] found id: "500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:14.475431 1405068 cri.go:89] found id: ""
	I0815 00:42:14.475446 1405068 logs.go:276] 1 containers: [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5]
	I0815 00:42:14.475503 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.479185 1405068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:42:14.479258 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:42:14.517625 1405068 cri.go:89] found id: "a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:14.517695 1405068 cri.go:89] found id: ""
	I0815 00:42:14.517718 1405068 logs.go:276] 1 containers: [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e]
	I0815 00:42:14.517809 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.521568 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:42:14.521684 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:42:14.561522 1405068 cri.go:89] found id: "b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:14.561588 1405068 cri.go:89] found id: ""
	I0815 00:42:14.561607 1405068 logs.go:276] 1 containers: [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1]
	I0815 00:42:14.561699 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.565517 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:42:14.565636 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:42:14.604808 1405068 cri.go:89] found id: "1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:14.604828 1405068 cri.go:89] found id: ""
	I0815 00:42:14.604836 1405068 logs.go:276] 1 containers: [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215]
	I0815 00:42:14.604901 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.608375 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:42:14.608452 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:42:14.653579 1405068 cri.go:89] found id: "a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:14.653657 1405068 cri.go:89] found id: ""
	I0815 00:42:14.653680 1405068 logs.go:276] 1 containers: [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55]
	I0815 00:42:14.653764 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.657325 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:42:14.657405 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:42:14.702609 1405068 cri.go:89] found id: "5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:14.702672 1405068 cri.go:89] found id: ""
	I0815 00:42:14.702695 1405068 logs.go:276] 1 containers: [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c]
	I0815 00:42:14.702787 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:14.706509 1405068 logs.go:123] Gathering logs for kubelet ...
	I0815 00:42:14.706572 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 00:42:14.759733 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.597954    1525 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-177998' and this object
	W0815 00:42:14.759979 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:14.760167 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:14.760397 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:14.760583 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:14.760811 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:14.799389 1405068 logs.go:123] Gathering logs for dmesg ...
	I0815 00:42:14.799419 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:42:14.816992 1405068 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:42:14.817020 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:42:15.068107 1405068 logs.go:123] Gathering logs for kube-apiserver [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da] ...
	I0815 00:42:15.068148 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:15.136937 1405068 logs.go:123] Gathering logs for kube-proxy [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215] ...
	I0815 00:42:15.136982 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:15.180626 1405068 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:42:15.180656 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:42:15.273850 1405068 logs.go:123] Gathering logs for etcd [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5] ...
	I0815 00:42:15.273887 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:15.342202 1405068 logs.go:123] Gathering logs for coredns [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e] ...
	I0815 00:42:15.342236 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:15.386086 1405068 logs.go:123] Gathering logs for kube-scheduler [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1] ...
	I0815 00:42:15.386120 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:15.443093 1405068 logs.go:123] Gathering logs for kube-controller-manager [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55] ...
	I0815 00:42:15.443129 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:15.512302 1405068 logs.go:123] Gathering logs for kindnet [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c] ...
	I0815 00:42:15.512337 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:15.562318 1405068 logs.go:123] Gathering logs for container status ...
	I0815 00:42:15.562352 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:42:15.614448 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:15.614517 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 00:42:15.614595 1405068 out.go:239] X Problems detected in kubelet:
	W0815 00:42:15.614634 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:15.614667 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:15.614701 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:15.614736 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:15.614766 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:15.614774 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:15.614781 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:42:25.616126 1405068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:42:25.629976 1405068 api_server.go:72] duration metric: took 2m19.890286111s to wait for apiserver process to appear ...
	I0815 00:42:25.630002 1405068 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:42:25.630039 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:42:25.630100 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:42:25.668851 1405068 cri.go:89] found id: "dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:25.668871 1405068 cri.go:89] found id: ""
	I0815 00:42:25.668882 1405068 logs.go:276] 1 containers: [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da]
	I0815 00:42:25.668938 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.672472 1405068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:42:25.672546 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:42:25.709902 1405068 cri.go:89] found id: "500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:25.709926 1405068 cri.go:89] found id: ""
	I0815 00:42:25.709934 1405068 logs.go:276] 1 containers: [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5]
	I0815 00:42:25.709993 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.713430 1405068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:42:25.713502 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:42:25.759482 1405068 cri.go:89] found id: "a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:25.759504 1405068 cri.go:89] found id: ""
	I0815 00:42:25.759522 1405068 logs.go:276] 1 containers: [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e]
	I0815 00:42:25.759585 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.763155 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:42:25.763229 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:42:25.806127 1405068 cri.go:89] found id: "b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:25.806149 1405068 cri.go:89] found id: ""
	I0815 00:42:25.806157 1405068 logs.go:276] 1 containers: [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1]
	I0815 00:42:25.806211 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.812165 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:42:25.812237 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:42:25.851063 1405068 cri.go:89] found id: "1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:25.851085 1405068 cri.go:89] found id: ""
	I0815 00:42:25.851093 1405068 logs.go:276] 1 containers: [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215]
	I0815 00:42:25.851171 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.854823 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:42:25.854910 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:42:25.899528 1405068 cri.go:89] found id: "a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:25.899547 1405068 cri.go:89] found id: ""
	I0815 00:42:25.899555 1405068 logs.go:276] 1 containers: [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55]
	I0815 00:42:25.899618 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.903072 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:42:25.903145 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:42:25.941256 1405068 cri.go:89] found id: "5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:25.941278 1405068 cri.go:89] found id: ""
	I0815 00:42:25.941286 1405068 logs.go:276] 1 containers: [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c]
	I0815 00:42:25.941343 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:25.944645 1405068 logs.go:123] Gathering logs for coredns [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e] ...
	I0815 00:42:25.944671 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:25.988222 1405068 logs.go:123] Gathering logs for kube-proxy [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215] ...
	I0815 00:42:25.988250 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:26.040158 1405068 logs.go:123] Gathering logs for kindnet [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c] ...
	I0815 00:42:26.040186 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:26.098079 1405068 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:42:26.098113 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:42:26.197079 1405068 logs.go:123] Gathering logs for container status ...
	I0815 00:42:26.197116 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:42:26.245767 1405068 logs.go:123] Gathering logs for dmesg ...
	I0815 00:42:26.245795 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:42:26.262543 1405068 logs.go:123] Gathering logs for etcd [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5] ...
	I0815 00:42:26.262571 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:26.325374 1405068 logs.go:123] Gathering logs for kube-apiserver [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da] ...
	I0815 00:42:26.325409 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:26.404734 1405068 logs.go:123] Gathering logs for kube-scheduler [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1] ...
	I0815 00:42:26.404767 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:26.456419 1405068 logs.go:123] Gathering logs for kube-controller-manager [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55] ...
	I0815 00:42:26.456453 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:26.531299 1405068 logs.go:123] Gathering logs for kubelet ...
	I0815 00:42:26.531334 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 00:42:26.586269 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.597954    1525 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.586549 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:26.586741 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.586971 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:26.587160 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.587390 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:26.627402 1405068 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:42:26.627434 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:42:26.775160 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:26.775188 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 00:42:26.775263 1405068 out.go:239] X Problems detected in kubelet:
	W0815 00:42:26.775276 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:26.775287 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.775308 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:26.775315 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:26.775322 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:26.775335 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:26.775341 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:42:36.775911 1405068 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 00:42:36.783827 1405068 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 00:42:36.784954 1405068 api_server.go:141] control plane version: v1.31.0
	I0815 00:42:36.784989 1405068 api_server.go:131] duration metric: took 11.154979952s to wait for apiserver health ...
	I0815 00:42:36.784999 1405068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:42:36.785022 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:42:36.785105 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:42:36.828315 1405068 cri.go:89] found id: "dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:36.828340 1405068 cri.go:89] found id: ""
	I0815 00:42:36.828350 1405068 logs.go:276] 1 containers: [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da]
	I0815 00:42:36.828406 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:36.832906 1405068 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:42:36.832986 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:42:36.875435 1405068 cri.go:89] found id: "500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:36.875455 1405068 cri.go:89] found id: ""
	I0815 00:42:36.875463 1405068 logs.go:276] 1 containers: [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5]
	I0815 00:42:36.875520 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:36.879077 1405068 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:42:36.879158 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:42:36.919070 1405068 cri.go:89] found id: "a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:36.919093 1405068 cri.go:89] found id: ""
	I0815 00:42:36.919100 1405068 logs.go:276] 1 containers: [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e]
	I0815 00:42:36.919158 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:36.922739 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:42:36.922824 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:42:36.960870 1405068 cri.go:89] found id: "b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:36.960893 1405068 cri.go:89] found id: ""
	I0815 00:42:36.960901 1405068 logs.go:276] 1 containers: [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1]
	I0815 00:42:36.960964 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:36.964534 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:42:36.964627 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:42:37.015395 1405068 cri.go:89] found id: "1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:37.015429 1405068 cri.go:89] found id: ""
	I0815 00:42:37.015438 1405068 logs.go:276] 1 containers: [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215]
	I0815 00:42:37.015512 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:37.020002 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:42:37.020128 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:42:37.076474 1405068 cri.go:89] found id: "a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:37.076539 1405068 cri.go:89] found id: ""
	I0815 00:42:37.076554 1405068 logs.go:276] 1 containers: [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55]
	I0815 00:42:37.076627 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:37.080229 1405068 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:42:37.080328 1405068 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:42:37.118486 1405068 cri.go:89] found id: "5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:37.118508 1405068 cri.go:89] found id: ""
	I0815 00:42:37.118517 1405068 logs.go:276] 1 containers: [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c]
	I0815 00:42:37.118578 1405068 ssh_runner.go:195] Run: which crictl
	I0815 00:42:37.123361 1405068 logs.go:123] Gathering logs for dmesg ...
	I0815 00:42:37.123388 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:42:37.141233 1405068 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:42:37.141262 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:42:37.274363 1405068 logs.go:123] Gathering logs for kube-apiserver [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da] ...
	I0815 00:42:37.274413 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da"
	I0815 00:42:37.329315 1405068 logs.go:123] Gathering logs for etcd [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5] ...
	I0815 00:42:37.329346 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5"
	I0815 00:42:37.383343 1405068 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:42:37.383374 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:42:37.482274 1405068 logs.go:123] Gathering logs for kindnet [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c] ...
	I0815 00:42:37.482312 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c"
	I0815 00:42:37.545254 1405068 logs.go:123] Gathering logs for container status ...
	I0815 00:42:37.545287 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:42:37.594933 1405068 logs.go:123] Gathering logs for kubelet ...
	I0815 00:42:37.594965 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 00:42:37.645851 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.597954    1525 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.646123 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:37.646313 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.646553 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:37.646741 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.646968 1405068 logs.go:138] Found kubelet problem: Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:37.688469 1405068 logs.go:123] Gathering logs for coredns [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e] ...
	I0815 00:42:37.688499 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e"
	I0815 00:42:37.735065 1405068 logs.go:123] Gathering logs for kube-scheduler [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1] ...
	I0815 00:42:37.735135 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1"
	I0815 00:42:37.788975 1405068 logs.go:123] Gathering logs for kube-proxy [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215] ...
	I0815 00:42:37.789012 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215"
	I0815 00:42:37.826310 1405068 logs.go:123] Gathering logs for kube-controller-manager [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55] ...
	I0815 00:42:37.826344 1405068 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55"
	I0815 00:42:37.900627 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:37.900660 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0815 00:42:37.900723 1405068 out.go:239] X Problems detected in kubelet:
	W0815 00:42:37.900737 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598006    1525 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:37.900751 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598059    1525 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.900759 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598076    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	W0815 00:42:37.900770 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: W0815 00:40:53.598114    1525 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-177998" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-177998' and this object
	W0815 00:42:37.900779 1405068 out.go:239]   Aug 15 00:40:53 addons-177998 kubelet[1525]: E0815 00:40:53.598127    1525 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-177998\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-177998' and this object" logger="UnhandledError"
	I0815 00:42:37.900786 1405068 out.go:304] Setting ErrFile to fd 2...
	I0815 00:42:37.900792 1405068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:42:47.920171 1405068 system_pods.go:59] 18 kube-system pods found
	I0815 00:42:47.920217 1405068 system_pods.go:61] "coredns-6f6b679f8f-pdg4h" [51767a84-0d40-4da1-924b-28e15407b138] Running
	I0815 00:42:47.920225 1405068 system_pods.go:61] "csi-hostpath-attacher-0" [28339244-9d98-4106-9481-245c68b0259c] Running
	I0815 00:42:47.920230 1405068 system_pods.go:61] "csi-hostpath-resizer-0" [030b9622-b512-430b-a968-a060d6533161] Running
	I0815 00:42:47.920235 1405068 system_pods.go:61] "csi-hostpathplugin-b9g9b" [d4802ea3-64b4-40db-8a57-c4ab43810472] Running
	I0815 00:42:47.920240 1405068 system_pods.go:61] "etcd-addons-177998" [29e3ecb7-e391-4c97-9e64-75907dddb196] Running
	I0815 00:42:47.920244 1405068 system_pods.go:61] "kindnet-slrd6" [420c6f3b-f588-4914-ad0c-5bedb94fb3e4] Running
	I0815 00:42:47.920250 1405068 system_pods.go:61] "kube-apiserver-addons-177998" [5e00f426-e2d4-459d-b4b3-0b3fc3009131] Running
	I0815 00:42:47.920255 1405068 system_pods.go:61] "kube-controller-manager-addons-177998" [1a764421-c392-4ef5-82f1-acee0a48e083] Running
	I0815 00:42:47.920260 1405068 system_pods.go:61] "kube-ingress-dns-minikube" [024edd96-4c4b-4440-a323-a9f32fe96019] Running
	I0815 00:42:47.920269 1405068 system_pods.go:61] "kube-proxy-5wktb" [7f98e909-5af9-4423-a14f-33f1ff0a5a08] Running
	I0815 00:42:47.920274 1405068 system_pods.go:61] "kube-scheduler-addons-177998" [ba4ed04a-90b5-4852-85fe-d7cf246020bc] Running
	I0815 00:42:47.920282 1405068 system_pods.go:61] "metrics-server-8988944d9-rf2fb" [727c86c4-3855-401b-98e3-b3bc46d8e36a] Running
	I0815 00:42:47.920287 1405068 system_pods.go:61] "nvidia-device-plugin-daemonset-7b7wb" [83483a1f-e9b5-416a-922d-45fe573a70cc] Running
	I0815 00:42:47.920293 1405068 system_pods.go:61] "registry-6fb4cdfc84-pjk6z" [8d5b9336-317e-46bc-aca7-c582ff9a713b] Running
	I0815 00:42:47.920297 1405068 system_pods.go:61] "registry-proxy-mhl5f" [ffcca5c8-f85a-422d-ae88-317ee7017802] Running
	I0815 00:42:47.920312 1405068 system_pods.go:61] "snapshot-controller-56fcc65765-5gn92" [960ee139-dbed-4d66-840e-a8e0e55578e3] Running
	I0815 00:42:47.920316 1405068 system_pods.go:61] "snapshot-controller-56fcc65765-fnpns" [0552d1c5-19ff-4f08-97d8-f16f0b6ff21f] Running
	I0815 00:42:47.920319 1405068 system_pods.go:61] "storage-provisioner" [c9d10c3f-3886-4a97-a23a-c59849cd617f] Running
	I0815 00:42:47.920326 1405068 system_pods.go:74] duration metric: took 11.135320218s to wait for pod list to return data ...
	I0815 00:42:47.920334 1405068 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:42:47.922926 1405068 default_sa.go:45] found service account: "default"
	I0815 00:42:47.922953 1405068 default_sa.go:55] duration metric: took 2.609696ms for default service account to be created ...
	I0815 00:42:47.922962 1405068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:42:47.933567 1405068 system_pods.go:86] 18 kube-system pods found
	I0815 00:42:47.933603 1405068 system_pods.go:89] "coredns-6f6b679f8f-pdg4h" [51767a84-0d40-4da1-924b-28e15407b138] Running
	I0815 00:42:47.933610 1405068 system_pods.go:89] "csi-hostpath-attacher-0" [28339244-9d98-4106-9481-245c68b0259c] Running
	I0815 00:42:47.933638 1405068 system_pods.go:89] "csi-hostpath-resizer-0" [030b9622-b512-430b-a968-a060d6533161] Running
	I0815 00:42:47.933649 1405068 system_pods.go:89] "csi-hostpathplugin-b9g9b" [d4802ea3-64b4-40db-8a57-c4ab43810472] Running
	I0815 00:42:47.933654 1405068 system_pods.go:89] "etcd-addons-177998" [29e3ecb7-e391-4c97-9e64-75907dddb196] Running
	I0815 00:42:47.933659 1405068 system_pods.go:89] "kindnet-slrd6" [420c6f3b-f588-4914-ad0c-5bedb94fb3e4] Running
	I0815 00:42:47.933668 1405068 system_pods.go:89] "kube-apiserver-addons-177998" [5e00f426-e2d4-459d-b4b3-0b3fc3009131] Running
	I0815 00:42:47.933673 1405068 system_pods.go:89] "kube-controller-manager-addons-177998" [1a764421-c392-4ef5-82f1-acee0a48e083] Running
	I0815 00:42:47.933678 1405068 system_pods.go:89] "kube-ingress-dns-minikube" [024edd96-4c4b-4440-a323-a9f32fe96019] Running
	I0815 00:42:47.933689 1405068 system_pods.go:89] "kube-proxy-5wktb" [7f98e909-5af9-4423-a14f-33f1ff0a5a08] Running
	I0815 00:42:47.933693 1405068 system_pods.go:89] "kube-scheduler-addons-177998" [ba4ed04a-90b5-4852-85fe-d7cf246020bc] Running
	I0815 00:42:47.933697 1405068 system_pods.go:89] "metrics-server-8988944d9-rf2fb" [727c86c4-3855-401b-98e3-b3bc46d8e36a] Running
	I0815 00:42:47.933728 1405068 system_pods.go:89] "nvidia-device-plugin-daemonset-7b7wb" [83483a1f-e9b5-416a-922d-45fe573a70cc] Running
	I0815 00:42:47.933776 1405068 system_pods.go:89] "registry-6fb4cdfc84-pjk6z" [8d5b9336-317e-46bc-aca7-c582ff9a713b] Running
	I0815 00:42:47.933780 1405068 system_pods.go:89] "registry-proxy-mhl5f" [ffcca5c8-f85a-422d-ae88-317ee7017802] Running
	I0815 00:42:47.933784 1405068 system_pods.go:89] "snapshot-controller-56fcc65765-5gn92" [960ee139-dbed-4d66-840e-a8e0e55578e3] Running
	I0815 00:42:47.933788 1405068 system_pods.go:89] "snapshot-controller-56fcc65765-fnpns" [0552d1c5-19ff-4f08-97d8-f16f0b6ff21f] Running
	I0815 00:42:47.933792 1405068 system_pods.go:89] "storage-provisioner" [c9d10c3f-3886-4a97-a23a-c59849cd617f] Running
	I0815 00:42:47.933812 1405068 system_pods.go:126] duration metric: took 10.844228ms to wait for k8s-apps to be running ...
	I0815 00:42:47.933826 1405068 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:42:47.933898 1405068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:42:47.946529 1405068 system_svc.go:56] duration metric: took 12.693281ms WaitForService to wait for kubelet
	I0815 00:42:47.946555 1405068 kubeadm.go:582] duration metric: took 2m42.206870888s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:42:47.946576 1405068 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:42:47.950166 1405068 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 00:42:47.950202 1405068 node_conditions.go:123] node cpu capacity is 2
	I0815 00:42:47.950214 1405068 node_conditions.go:105] duration metric: took 3.633114ms to run NodePressure ...
	I0815 00:42:47.950244 1405068 start.go:241] waiting for startup goroutines ...
	I0815 00:42:47.950262 1405068 start.go:246] waiting for cluster config update ...
	I0815 00:42:47.950278 1405068 start.go:255] writing updated cluster config ...
	I0815 00:42:47.950608 1405068 ssh_runner.go:195] Run: rm -f paused
	I0815 00:42:48.300155 1405068 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:42:48.302533 1405068 out.go:177] * Done! kubectl is now configured to use "addons-177998" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.298904605Z" level=info msg="Removed container 539c31dc78280b7d73f05327ac25cd7120358bd5c2f5b9d053932556fcf035fd: ingress-nginx/ingress-nginx-admission-patch-dnr48/patch" id=9eb38b27-cb3f-4793-8aff-4e84a40e9c2d name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.300217498Z" level=info msg="Removing container: bcea413941280b2cd4ad23e33050d642156223442219f86f4843c1508037f3e8" id=b4c0b8af-a366-4d13-96e3-2630dd26820f name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.317823146Z" level=info msg="Removed container bcea413941280b2cd4ad23e33050d642156223442219f86f4843c1508037f3e8: ingress-nginx/ingress-nginx-admission-create-kgc9j/create" id=b4c0b8af-a366-4d13-96e3-2630dd26820f name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.319246200Z" level=info msg="Stopping pod sandbox: 920723acc74a4845db34e4b0e24f1c1cb1417c82150007ff6f6fdc234bc01513" id=b8f2fbe0-8567-40b5-9fdc-20b0dfe3fcaa name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.319286438Z" level=info msg="Stopped pod sandbox (already stopped): 920723acc74a4845db34e4b0e24f1c1cb1417c82150007ff6f6fdc234bc01513" id=b8f2fbe0-8567-40b5-9fdc-20b0dfe3fcaa name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.319620786Z" level=info msg="Removing pod sandbox: 920723acc74a4845db34e4b0e24f1c1cb1417c82150007ff6f6fdc234bc01513" id=ac38987e-e585-4784-88a5-7ae1f7f27374 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.330795606Z" level=info msg="Removed pod sandbox: 920723acc74a4845db34e4b0e24f1c1cb1417c82150007ff6f6fdc234bc01513" id=ac38987e-e585-4784-88a5-7ae1f7f27374 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.331421775Z" level=info msg="Stopping pod sandbox: 572f2fef0d790a0b8c171b42dc9ba1dcfe618130405e40b6e472edc1fd1d76a4" id=596d0a1f-ccd9-4ec7-9765-2d3b01f02966 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.331470070Z" level=info msg="Stopped pod sandbox (already stopped): 572f2fef0d790a0b8c171b42dc9ba1dcfe618130405e40b6e472edc1fd1d76a4" id=596d0a1f-ccd9-4ec7-9765-2d3b01f02966 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.331781937Z" level=info msg="Removing pod sandbox: 572f2fef0d790a0b8c171b42dc9ba1dcfe618130405e40b6e472edc1fd1d76a4" id=52f1ac3e-f720-4422-8b16-071dd23b846b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.339896389Z" level=info msg="Removed pod sandbox: 572f2fef0d790a0b8c171b42dc9ba1dcfe618130405e40b6e472edc1fd1d76a4" id=52f1ac3e-f720-4422-8b16-071dd23b846b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.340359803Z" level=info msg="Stopping pod sandbox: 648f463361557c6e6161f2ac609dd2f06c788db049b78d4ff88571a29407809f" id=99d47b14-8a10-4bec-8a3a-b9fae2fb7821 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.340397283Z" level=info msg="Stopped pod sandbox (already stopped): 648f463361557c6e6161f2ac609dd2f06c788db049b78d4ff88571a29407809f" id=99d47b14-8a10-4bec-8a3a-b9fae2fb7821 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.341431623Z" level=info msg="Removing pod sandbox: 648f463361557c6e6161f2ac609dd2f06c788db049b78d4ff88571a29407809f" id=eaf3f84f-94ba-4015-9c83-44860c923ead name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.350672597Z" level=info msg="Removed pod sandbox: 648f463361557c6e6161f2ac609dd2f06c788db049b78d4ff88571a29407809f" id=eaf3f84f-94ba-4015-9c83-44860c923ead name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.351153595Z" level=info msg="Stopping pod sandbox: 21bbefa25b8494c09d72f7f6c558b52c24c26c1bf091327439788234a25f485e" id=6976dfbe-cc0b-439a-bf33-7edc75d2f811 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.351191207Z" level=info msg="Stopped pod sandbox (already stopped): 21bbefa25b8494c09d72f7f6c558b52c24c26c1bf091327439788234a25f485e" id=6976dfbe-cc0b-439a-bf33-7edc75d2f811 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.351464985Z" level=info msg="Removing pod sandbox: 21bbefa25b8494c09d72f7f6c558b52c24c26c1bf091327439788234a25f485e" id=ac778eaa-8c85-4d85-bee3-4a4bf508dd7e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:47:03 addons-177998 crio[967]: time="2024-08-15 00:47:03.360244458Z" level=info msg="Removed pod sandbox: 21bbefa25b8494c09d72f7f6c558b52c24c26c1bf091327439788234a25f485e" id=ac778eaa-8c85-4d85-bee3-4a4bf508dd7e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 00:49:50 addons-177998 crio[967]: time="2024-08-15 00:49:50.473188914Z" level=info msg="Stopping container: 0dd58915670bf72feb1a25ee6d065355e4dce6f0bc568875ad06eb0baae11ead (timeout: 30s)" id=5781a656-5ae2-4c59-bb04-62f8a74eac25 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 00:49:51 addons-177998 crio[967]: time="2024-08-15 00:49:51.639943257Z" level=info msg="Stopped container 0dd58915670bf72feb1a25ee6d065355e4dce6f0bc568875ad06eb0baae11ead: kube-system/metrics-server-8988944d9-rf2fb/metrics-server" id=5781a656-5ae2-4c59-bb04-62f8a74eac25 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 00:49:51 addons-177998 crio[967]: time="2024-08-15 00:49:51.640862783Z" level=info msg="Stopping pod sandbox: ed4ef1e3504ad5331a43df5d3018f1ba9a3ea022a7250a190db8e285418f69d8" id=5a956d7e-0ad8-43da-a501-963eeb5e3ab3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 00:49:51 addons-177998 crio[967]: time="2024-08-15 00:49:51.641175092Z" level=info msg="Got pod network &{Name:metrics-server-8988944d9-rf2fb Namespace:kube-system ID:ed4ef1e3504ad5331a43df5d3018f1ba9a3ea022a7250a190db8e285418f69d8 UID:727c86c4-3855-401b-98e3-b3bc46d8e36a NetNS:/var/run/netns/70436106-8d4d-4e68-99d2-4e4ec5e81a06 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 15 00:49:51 addons-177998 crio[967]: time="2024-08-15 00:49:51.641540636Z" level=info msg="Deleting pod kube-system_metrics-server-8988944d9-rf2fb from CNI network \"kindnet\" (type=ptp)"
	Aug 15 00:49:51 addons-177998 crio[967]: time="2024-08-15 00:49:51.691496189Z" level=info msg="Stopped pod sandbox: ed4ef1e3504ad5331a43df5d3018f1ba9a3ea022a7250a190db8e285418f69d8" id=5a956d7e-0ad8-43da-a501-963eeb5e3ab3 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a09856c4982d1       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   0934939c30064       hello-world-app-55bf9c44b4-nh2g8
	5a73034614771       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   73ef8fbd082c5       nginx
	feaaac1e3b1ae       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                   5 minutes ago       Running             headlamp                  0                   fd824314340ff       headlamp-57fb76fcdb-zzp8q
	de6512a94c68d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     7 minutes ago       Running             busybox                   0                   4d00b257068f6       busybox
	1e63db71930ff       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        8 minutes ago       Running             local-path-provisioner    0                   0f4d8a768bf34       local-path-provisioner-86d989889c-28bw5
	0dd58915670bf       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   ed4ef1e3504ad       metrics-server-8988944d9-rf2fb
	a258c5a63a70f       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   e01e64c1da527       coredns-6f6b679f8f-pdg4h
	f60049867c20f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   8435130c9b79f       storage-provisioner
	5093e07d1185a       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      9 minutes ago       Running             kindnet-cni               0                   4fce3aedd839d       kindnet-slrd6
	1a5db7b994921       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        9 minutes ago       Running             kube-proxy                0                   3bb2a1c47b619       kube-proxy-5wktb
	500f0254c56c2       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        9 minutes ago       Running             etcd                      0                   5c7bbe52fd00b       etcd-addons-177998
	dbe184e7f765b       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        9 minutes ago       Running             kube-apiserver            0                   aea7e54a7181f       kube-apiserver-addons-177998
	b5b63db1c68aa       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        9 minutes ago       Running             kube-scheduler            0                   9868558830c36       kube-scheduler-addons-177998
	a7fc9ee7679f5       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        9 minutes ago       Running             kube-controller-manager   0                   63a6290e97bea       kube-controller-manager-addons-177998
	
	
	==> coredns [a258c5a63a70feb900b39c806fdb70dfd1dac65dcd46950974a81f78fe6dd70e] <==
	[INFO] 10.244.0.15:41126 - 13082 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002907416s
	[INFO] 10.244.0.15:51728 - 45599 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00017851s
	[INFO] 10.244.0.15:51728 - 43802 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121222s
	[INFO] 10.244.0.15:33352 - 19452 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107675s
	[INFO] 10.244.0.15:33352 - 43745 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000210361s
	[INFO] 10.244.0.15:53750 - 21595 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059569s
	[INFO] 10.244.0.15:53750 - 42073 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003703s
	[INFO] 10.244.0.15:41220 - 3637 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046317s
	[INFO] 10.244.0.15:41220 - 58167 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123954s
	[INFO] 10.244.0.15:34437 - 28038 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001607767s
	[INFO] 10.244.0.15:34437 - 48516 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001817865s
	[INFO] 10.244.0.15:40041 - 32770 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000061144s
	[INFO] 10.244.0.15:40041 - 27649 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042141s
	[INFO] 10.244.0.20:38247 - 29419 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000236142s
	[INFO] 10.244.0.20:38516 - 44887 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170394s
	[INFO] 10.244.0.20:46368 - 19249 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161517s
	[INFO] 10.244.0.20:33110 - 115 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149086s
	[INFO] 10.244.0.20:36775 - 28463 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144557s
	[INFO] 10.244.0.20:52840 - 10409 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010459s
	[INFO] 10.244.0.20:45939 - 58597 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002100718s
	[INFO] 10.244.0.20:57186 - 24916 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001846067s
	[INFO] 10.244.0.20:42101 - 10131 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001080353s
	[INFO] 10.244.0.20:47465 - 6664 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001466015s
	[INFO] 10.244.0.23:37402 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000296236s
	[INFO] 10.244.0.23:49096 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138256s
	
	
	==> describe nodes <==
	Name:               addons-177998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-177998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=addons-177998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_40_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-177998
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:39:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-177998
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:49:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:47:10 +0000   Thu, 15 Aug 2024 00:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:47:10 +0000   Thu, 15 Aug 2024 00:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:47:10 +0000   Thu, 15 Aug 2024 00:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:47:10 +0000   Thu, 15 Aug 2024 00:40:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-177998
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3af8476b7464292ae80b608fd543d32
	  System UUID:                0a13cbd1-f040-4763-85c6-5dd9afda65d5
	  Boot ID:                    a45aa34f-c9ce-4e83-8881-7d8273e4eb81
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	  default                     hello-world-app-55bf9c44b4-nh2g8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  headlamp                    headlamp-57fb76fcdb-zzp8q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 coredns-6f6b679f8f-pdg4h                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m44s
	  kube-system                 etcd-addons-177998                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m50s
	  kube-system                 kindnet-slrd6                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m44s
	  kube-system                 kube-apiserver-addons-177998               250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-controller-manager-addons-177998      200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-proxy-5wktb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-addons-177998               100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  local-path-storage          local-path-provisioner-86d989889c-28bw5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m39s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m59s (x8 over 9m59s)  kubelet          Node addons-177998 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m59s (x8 over 9m59s)  kubelet          Node addons-177998 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m59s (x7 over 9m59s)  kubelet          Node addons-177998 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m52s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m51s                  kubelet          Node addons-177998 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m51s                  kubelet          Node addons-177998 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m51s                  kubelet          Node addons-177998 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m47s                  node-controller  Node addons-177998 event: Registered Node addons-177998 in Controller
	  Normal   NodeReady                8m59s                  kubelet          Node addons-177998 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug14 22:01] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[Aug15 00:11] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.606282] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [500f0254c56c2a8f7f1a170fe107172fb9ab322ff51810ac9f91a88bdfe576b5] <==
	{"level":"warn","ts":"2024-08-15T00:40:09.412527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"339.487998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.412587Z","caller":"traceutil/trace.go:171","msg":"trace[394224514] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:369; }","duration":"339.568341ms","start":"2024-08-15T00:40:09.073007Z","end":"2024-08-15T00:40:09.412575Z","steps":["trace[394224514] 'agreement among raft nodes before linearized reading'  (duration: 339.434049ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.412978Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:40:09.072968Z","time spent":"339.994636ms","remote":"127.0.0.1:39626","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 "}
	{"level":"info","ts":"2024-08-15T00:40:09.549676Z","caller":"traceutil/trace.go:171","msg":"trace[1584218464] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"242.901065ms","start":"2024-08-15T00:40:09.306756Z","end":"2024-08-15T00:40:09.549657Z","steps":["trace[1584218464] 'process raft request'  (duration: 218.206825ms)","trace[1584218464] 'compare'  (duration: 24.543013ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:40:09.563494Z","caller":"traceutil/trace.go:171","msg":"trace[1622755248] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"250.582814ms","start":"2024-08-15T00:40:09.312890Z","end":"2024-08-15T00:40:09.563473Z","steps":["trace[1622755248] 'process raft request'  (duration: 236.72453ms)","trace[1622755248] 'attach lease to kv pair' {req_type:put; key:/registry/daemonsets/kube-system/kube-proxy; req_size:2860; } (duration: 13.569828ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:40:09.564136Z","caller":"traceutil/trace.go:171","msg":"trace[88815966] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"251.139364ms","start":"2024-08-15T00:40:09.312987Z","end":"2024-08-15T00:40:09.564126Z","steps":["trace[88815966] 'process raft request'  (duration: 250.293339ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:40:09.569059Z","caller":"traceutil/trace.go:171","msg":"trace[1828840549] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"256.009464ms","start":"2024-08-15T00:40:09.313035Z","end":"2024-08-15T00:40:09.569044Z","steps":["trace[1828840549] 'process raft request'  (duration: 250.898545ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:40:09.569462Z","caller":"traceutil/trace.go:171","msg":"trace[474701671] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"230.890513ms","start":"2024-08-15T00:40:09.338562Z","end":"2024-08-15T00:40:09.569452Z","steps":["trace[474701671] 'process raft request'  (duration: 225.532957ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:40:09.575394Z","caller":"traceutil/trace.go:171","msg":"trace[1076175795] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"236.633658ms","start":"2024-08-15T00:40:09.338746Z","end":"2024-08-15T00:40:09.575380Z","steps":["trace[1076175795] 'process raft request'  (duration: 230.66722ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:40:09.575769Z","caller":"traceutil/trace.go:171","msg":"trace[653685191] linearizableReadLoop","detail":"{readStateIndex:384; appliedIndex:378; }","duration":"163.356699ms","start":"2024-08-15T00:40:09.412402Z","end":"2024-08-15T00:40:09.575759Z","steps":["trace[653685191] 'read index received'  (duration: 112.283307ms)","trace[653685191] 'applied index is now lower than readState.Index'  (duration: 51.072506ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T00:40:09.575920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.378686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.575975Z","caller":"traceutil/trace.go:171","msg":"trace[766774977] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:0; response_revision:375; }","duration":"237.450678ms","start":"2024-08-15T00:40:09.338515Z","end":"2024-08-15T00:40:09.575965Z","steps":["trace[766774977] 'agreement among raft nodes before linearized reading'  (duration: 237.359659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.576169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.668292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.576235Z","caller":"traceutil/trace.go:171","msg":"trace[1145682486] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:375; }","duration":"237.723463ms","start":"2024-08-15T00:40:09.338492Z","end":"2024-08-15T00:40:09.576215Z","steps":["trace[1145682486] 'agreement among raft nodes before linearized reading'  (duration: 237.643603ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.576431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.71557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-15T00:40:09.576500Z","caller":"traceutil/trace.go:171","msg":"trace[85294301] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:375; }","duration":"269.785845ms","start":"2024-08-15T00:40:09.306706Z","end":"2024-08-15T00:40:09.576491Z","steps":["trace[85294301] 'agreement among raft nodes before linearized reading'  (duration: 269.692866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.576669Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.060986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.576734Z","caller":"traceutil/trace.go:171","msg":"trace[1742780218] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:375; }","duration":"389.118873ms","start":"2024-08-15T00:40:09.187597Z","end":"2024-08-15T00:40:09.576716Z","steps":["trace[1742780218] 'agreement among raft nodes before linearized reading'  (duration: 389.032998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.576780Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:40:09.187581Z","time spent":"389.191093ms","remote":"127.0.0.1:39796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" "}
	{"level":"warn","ts":"2024-08-15T00:40:09.587749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.334698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-slrd6\" ","response":"range_response_count:1 size:3689"}
	{"level":"info","ts":"2024-08-15T00:40:09.587904Z","caller":"traceutil/trace.go:171","msg":"trace[367980808] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-slrd6; range_end:; response_count:1; response_revision:375; }","duration":"400.49935ms","start":"2024-08-15T00:40:09.187390Z","end":"2024-08-15T00:40:09.587890Z","steps":["trace[367980808] 'agreement among raft nodes before linearized reading'  (duration: 400.25835ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.587985Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:40:09.187352Z","time spent":"400.623336ms","remote":"127.0.0.1:39766","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":3713,"request content":"key:\"/registry/pods/kube-system/kindnet-slrd6\" "}
	{"level":"warn","ts":"2024-08-15T00:40:09.588818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"515.756501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:40:09.588951Z","caller":"traceutil/trace.go:171","msg":"trace[1304732980] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:375; }","duration":"515.893206ms","start":"2024-08-15T00:40:09.073046Z","end":"2024-08-15T00:40:09.588939Z","steps":["trace[1304732980] 'agreement among raft nodes before linearized reading'  (duration: 515.717158ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:40:09.589019Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:40:09.073035Z","time spent":"515.963244ms","remote":"127.0.0.1:39796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" "}
	
	
	==> kernel <==
	 00:49:52 up  9:32,  0 users,  load average: 0.24, 0.75, 1.71
	Linux addons-177998 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [5093e07d1185af4a5a222407ab3a29ff288ce841e90199c05cccc63f47d0fb5c] <==
	I0815 00:48:33.039605       1 main.go:299] handling current node
	I0815 00:48:43.040001       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:48:43.040123       1 main.go:299] handling current node
	W0815 00:48:46.091407       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:48:46.091439       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 00:48:53.039769       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:48:53.039900       1 main.go:299] handling current node
	W0815 00:48:54.906626       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:48:54.906662       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0815 00:48:58.649187       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:48:58.649218       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 00:49:03.039793       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:49:03.039954       1 main.go:299] handling current node
	I0815 00:49:13.040341       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:49:13.040398       1 main.go:299] handling current node
	I0815 00:49:23.039880       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:49:23.039920       1 main.go:299] handling current node
	I0815 00:49:33.040375       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:49:33.040504       1 main.go:299] handling current node
	W0815 00:49:34.414559       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 00:49:34.414593       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0815 00:49:39.318545       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:49:39.318576       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 00:49:43.039801       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 00:49:43.039839       1 main.go:299] handling current node
	
	
	==> kube-apiserver [dbe184e7f765be9ac66b9ce1c73b10856697e7599288f048bbd6ad3eec3068da] <==
	I0815 00:42:13.967783       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0815 00:42:13.977827       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0815 00:42:56.998903       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38598: use of closed network connection
	E0815 00:42:57.258759       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38630: use of closed network connection
	E0815 00:42:57.390493       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38658: use of closed network connection
	I0815 00:43:25.436324       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 00:43:58.975116       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.236.47"}
	I0815 00:44:04.212452       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.220622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:44:04.244233       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.244367       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:44:04.253615       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.253672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:44:04.262919       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.263034       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:44:04.317660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:44:04.317714       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 00:44:05.253805       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 00:44:05.318428       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0815 00:44:05.415613       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0815 00:44:16.847205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 00:44:17.884266       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0815 00:44:22.438678       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 00:44:22.744279       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.182.112"}
	I0815 00:46:41.777892       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.72.163"}
	
	
	==> kube-controller-manager [a7fc9ee7679f5dc3dbab1ba2d6296b173ac67b9478ee292fb4f606b343d35d55] <==
	W0815 00:48:00.238090       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:48:00.238240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:48:10.753760       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:48:10.753804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:48:13.895913       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:48:13.895957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:48:16.138140       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:48:16.138190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:48:43.376825       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:48:43.376948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:48:49.674490       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:48:49.674531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:48:53.439521       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:48:53.439569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:49:01.535612       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:49:01.535659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:49:19.457121       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:49:19.457327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:49:28.170561       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:49:28.170607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:49:45.087459       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:49:45.087520       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 00:49:50.444431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="6.317µs"
	W0815 00:49:51.460276       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:49:51.460318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [1a5db7b9949210333ac5a2849cfadc4f569ec584aad9d6eca7abf101875a7215] <==
	I0815 00:40:11.471016       1 server_linux.go:66] "Using iptables proxy"
	I0815 00:40:12.610931       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 00:40:12.611071       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:40:12.770544       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 00:40:12.770740       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:40:12.806111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:40:12.806794       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:40:12.841767       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:40:12.914507       1 config.go:197] "Starting service config controller"
	I0815 00:40:12.914548       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:40:12.914573       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:40:12.914578       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:40:12.915044       1 config.go:326] "Starting node config controller"
	I0815 00:40:12.915063       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:40:13.050730       1 shared_informer.go:320] Caches are synced for node config
	I0815 00:40:13.051674       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:40:13.051698       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b5b63db1c68aad62aae49b33186fdad027076508c87358bb463048e4c65b78e1] <==
	W0815 00:39:58.145746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:58.145810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.145928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 00:39:58.145983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.146080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:58.146131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.146250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:39:58.146444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.146320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 00:39:58.146619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:58.146588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:58.146733       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.067754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 00:39:59.067890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.078935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 00:39:59.079080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.230101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:59.230150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.253034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:39:59.253175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.320178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:39:59.320312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:39:59.539731       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:39:59.539778       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 00:40:01.931847       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:48:31 addons-177998 kubelet[1525]: E0815 00:48:31.242021    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682911241737296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:48:41 addons-177998 kubelet[1525]: E0815 00:48:41.245325    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682921244981822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:48:41 addons-177998 kubelet[1525]: E0815 00:48:41.245365    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682921244981822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:48:51 addons-177998 kubelet[1525]: E0815 00:48:51.247500    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682931247284454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:48:51 addons-177998 kubelet[1525]: E0815 00:48:51.247535    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682931247284454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:01 addons-177998 kubelet[1525]: E0815 00:49:01.249749    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682941249458913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:01 addons-177998 kubelet[1525]: E0815 00:49:01.249790    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682941249458913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:11 addons-177998 kubelet[1525]: E0815 00:49:11.251955    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682951251717209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:11 addons-177998 kubelet[1525]: E0815 00:49:11.251994    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682951251717209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:12 addons-177998 kubelet[1525]: I0815 00:49:12.044702    1525 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 00:49:21 addons-177998 kubelet[1525]: E0815 00:49:21.254767    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682961254548217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:21 addons-177998 kubelet[1525]: E0815 00:49:21.254807    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682961254548217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:31 addons-177998 kubelet[1525]: E0815 00:49:31.257881    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682971257582403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:31 addons-177998 kubelet[1525]: E0815 00:49:31.257924    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682971257582403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:41 addons-177998 kubelet[1525]: E0815 00:49:41.260754    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682981260064968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:41 addons-177998 kubelet[1525]: E0815 00:49:41.260794    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682981260064968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:50 addons-177998 kubelet[1525]: I0815 00:49:50.471813    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-nh2g8" podStartSLOduration=188.032178767 podStartE2EDuration="3m9.471794635s" podCreationTimestamp="2024-08-15 00:46:41 +0000 UTC" firstStartedPulling="2024-08-15 00:46:41.88717446 +0000 UTC m=+401.126127173" lastFinishedPulling="2024-08-15 00:46:43.326790328 +0000 UTC m=+402.565743041" observedRunningTime="2024-08-15 00:46:44.076444548 +0000 UTC m=+403.315397269" watchObservedRunningTime="2024-08-15 00:49:50.471794635 +0000 UTC m=+589.710747348"
	Aug 15 00:49:51 addons-177998 kubelet[1525]: E0815 00:49:51.263749    1525 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682991263392044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:51 addons-177998 kubelet[1525]: E0815 00:49:51.263790    1525 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682991263392044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593922,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:49:51 addons-177998 kubelet[1525]: I0815 00:49:51.859079    1525 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/727c86c4-3855-401b-98e3-b3bc46d8e36a-tmp-dir\") pod \"727c86c4-3855-401b-98e3-b3bc46d8e36a\" (UID: \"727c86c4-3855-401b-98e3-b3bc46d8e36a\") "
	Aug 15 00:49:51 addons-177998 kubelet[1525]: I0815 00:49:51.859135    1525 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7vct\" (UniqueName: \"kubernetes.io/projected/727c86c4-3855-401b-98e3-b3bc46d8e36a-kube-api-access-r7vct\") pod \"727c86c4-3855-401b-98e3-b3bc46d8e36a\" (UID: \"727c86c4-3855-401b-98e3-b3bc46d8e36a\") "
	Aug 15 00:49:51 addons-177998 kubelet[1525]: I0815 00:49:51.859691    1525 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727c86c4-3855-401b-98e3-b3bc46d8e36a-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "727c86c4-3855-401b-98e3-b3bc46d8e36a" (UID: "727c86c4-3855-401b-98e3-b3bc46d8e36a"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 15 00:49:51 addons-177998 kubelet[1525]: I0815 00:49:51.866988    1525 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727c86c4-3855-401b-98e3-b3bc46d8e36a-kube-api-access-r7vct" (OuterVolumeSpecName: "kube-api-access-r7vct") pod "727c86c4-3855-401b-98e3-b3bc46d8e36a" (UID: "727c86c4-3855-401b-98e3-b3bc46d8e36a"). InnerVolumeSpecName "kube-api-access-r7vct". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 00:49:51 addons-177998 kubelet[1525]: I0815 00:49:51.959663    1525 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/727c86c4-3855-401b-98e3-b3bc46d8e36a-tmp-dir\") on node \"addons-177998\" DevicePath \"\""
	Aug 15 00:49:51 addons-177998 kubelet[1525]: I0815 00:49:51.959706    1525 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r7vct\" (UniqueName: \"kubernetes.io/projected/727c86c4-3855-401b-98e3-b3bc46d8e36a-kube-api-access-r7vct\") on node \"addons-177998\" DevicePath \"\""
	
	
	==> storage-provisioner [f60049867c20fc8cd5364cc07d6d0f67681b24ba063cc8b376da62f90ee2ddfb] <==
	I0815 00:40:54.539346       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 00:40:54.583782       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 00:40:54.584495       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 00:40:54.619934       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 00:40:54.623598       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-177998_55a21e83-1fac-4f1b-8c2e-e17c900684f0!
	I0815 00:40:54.620565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"252a22bf-3495-4753-83b0-01175625f944", APIVersion:"v1", ResourceVersion:"939", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-177998_55a21e83-1fac-4f1b-8c2e-e17c900684f0 became leader
	I0815 00:40:54.724300       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-177998_55a21e83-1fac-4f1b-8c2e-e17c900684f0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-177998 -n addons-177998
helpers_test.go:261: (dbg) Run:  kubectl --context addons-177998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (348.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (129.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-095774 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-095774 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m4.09219991s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE    VERSION
	ha-095774       NotReady   control-plane   12m    v1.31.0
	ha-095774-m02   Ready      control-plane   11m    v1.31.0
	ha-095774-m04   Ready      <none>          9m9s   v1.31.0

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
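The failed assertion above reduces to counting nodes whose Ready condition is True; the test shells out to kubectl with a go-template for this. Below is a minimal sketch of the same check done directly against the cluster API with client-go; the kubeconfig path and package layout are illustrative assumptions, not code from ha_test.go. A Ready status of Unknown (as ha-095774 reports here) means the node controller has stopped hearing from that node's kubelet, consistent with the NotReady primary control plane after the restart.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); the test would use the ha-095774 context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// Count nodes whose Ready condition is True; Unknown and False do not count.
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready++
			}
		}
	}
	// RestartCluster expects 3/3 Ready; in this run ha-095774 reported Unknown.
	fmt.Printf("%d/%d nodes Ready\n", ready, len(nodes.Items))
}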
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-095774
helpers_test.go:235: (dbg) docker inspect ha-095774:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "19e21c0763342b7a8fc977bf33ca962c388b91d0111aff9f6d8bdbc4cc7ffde6",
	        "Created": "2024-08-15T00:54:07.082348939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1464883,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T01:04:30.283089944Z",
	            "FinishedAt": "2024-08-15T01:04:29.386695102Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/19e21c0763342b7a8fc977bf33ca962c388b91d0111aff9f6d8bdbc4cc7ffde6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19e21c0763342b7a8fc977bf33ca962c388b91d0111aff9f6d8bdbc4cc7ffde6/hostname",
	        "HostsPath": "/var/lib/docker/containers/19e21c0763342b7a8fc977bf33ca962c388b91d0111aff9f6d8bdbc4cc7ffde6/hosts",
	        "LogPath": "/var/lib/docker/containers/19e21c0763342b7a8fc977bf33ca962c388b91d0111aff9f6d8bdbc4cc7ffde6/19e21c0763342b7a8fc977bf33ca962c388b91d0111aff9f6d8bdbc4cc7ffde6-json.log",
	        "Name": "/ha-095774",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-095774:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-095774",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c15530d3be58c5d6f0047598b6923826bcd5291a4e91a1af7548f6883893a8fb-init/diff:/var/lib/docker/overlay2/433fc574d59582b9724e66836c411c49856e3ca47c5bf1f4fddf41d4347d66bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c15530d3be58c5d6f0047598b6923826bcd5291a4e91a1af7548f6883893a8fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c15530d3be58c5d6f0047598b6923826bcd5291a4e91a1af7548f6883893a8fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c15530d3be58c5d6f0047598b6923826bcd5291a4e91a1af7548f6883893a8fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-095774",
	                "Source": "/var/lib/docker/volumes/ha-095774/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-095774",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-095774",
	                "name.minikube.sigs.k8s.io": "ha-095774",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "01b84b5b1e89feef73fe097c1e4346117f4bc6405d1762d0fb2a342e8380552d",
	            "SandboxKey": "/var/run/docker/netns/01b84b5b1e89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34660"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34661"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34664"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34662"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34663"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-095774": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "7753d14c7e38f6b120801ff704563f852fe997f2565268010ed3545d1aa489f5",
	                    "EndpointID": "660f828ea0a7c5868987df6595aaddf465d13a989e58f90e075cd6096df6d835",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-095774",
	                        "19e21c076334"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
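The inspect dump above is what the post-mortem uses to confirm that the node container came back up (State.Status "running", restarted at 01:04:30) and which host ports are mapped to it. As a minimal sketch, the same fields can be read with the Docker Engine Go SDK (github.com/docker/docker); the client options shown are standard, but treat the snippet as an illustration, not as minikube's own code.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Inspect the minikube node container by name, as `docker inspect ha-095774` does.
	insp, err := cli.ContainerInspect(context.Background(), "ha-095774")
	if err != nil {
		panic(err)
	}
	// State records that the container was restarted shortly before the failed readiness check.
	fmt.Printf("status=%s started=%s finished=%s\n",
		insp.State.Status, insp.State.StartedAt, insp.State.FinishedAt)
	// Host port bound to 22/tcp, which the provisioning log below dials over SSH.
	fmt.Println(insp.NetworkSettings.Ports["22/tcp"])
}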
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-095774 -n ha-095774
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-095774 logs -n 25: (2.06378166s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-095774 cp ha-095774-m03:/home/docker/cp-test.txt                              | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m04:/home/docker/cp-test_ha-095774-m03_ha-095774-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n                                                                 | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n ha-095774-m04 sudo cat                                          | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | /home/docker/cp-test_ha-095774-m03_ha-095774-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-095774 cp testdata/cp-test.txt                                                | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n                                                                 | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-095774 cp ha-095774-m04:/home/docker/cp-test.txt                              | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2559114583/001/cp-test_ha-095774-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n                                                                 | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-095774 cp ha-095774-m04:/home/docker/cp-test.txt                              | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774:/home/docker/cp-test_ha-095774-m04_ha-095774.txt                       |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n                                                                 | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n ha-095774 sudo cat                                              | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | /home/docker/cp-test_ha-095774-m04_ha-095774.txt                                 |           |         |         |                     |                     |
	| cp      | ha-095774 cp ha-095774-m04:/home/docker/cp-test.txt                              | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m02:/home/docker/cp-test_ha-095774-m04_ha-095774-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n                                                                 | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n ha-095774-m02 sudo cat                                          | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | /home/docker/cp-test_ha-095774-m04_ha-095774-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-095774 cp ha-095774-m04:/home/docker/cp-test.txt                              | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m03:/home/docker/cp-test_ha-095774-m04_ha-095774-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n                                                                 | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | ha-095774-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-095774 ssh -n ha-095774-m03 sudo cat                                          | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | /home/docker/cp-test_ha-095774-m04_ha-095774-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-095774 node stop m02 -v=7                                                     | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-095774 node start m02 -v=7                                                    | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:58 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-095774 -v=7                                                           | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-095774 -v=7                                                                | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:58 UTC | 15 Aug 24 00:59 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-095774 --wait=true -v=7                                                    | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 00:59 UTC | 15 Aug 24 01:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-095774                                                                | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 01:03 UTC |                     |
	| node    | ha-095774 node delete m03 -v=7                                                   | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 01:03 UTC | 15 Aug 24 01:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-095774 stop -v=7                                                              | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 01:03 UTC | 15 Aug 24 01:04 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-095774 --wait=true                                                         | ha-095774 | jenkins | v1.33.1 | 15 Aug 24 01:04 UTC | 15 Aug 24 01:06 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:04:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:04:29.772620 1464677 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:04:29.772781 1464677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:04:29.772792 1464677 out.go:304] Setting ErrFile to fd 2...
	I0815 01:04:29.772797 1464677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:04:29.773050 1464677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 01:04:29.773427 1464677 out.go:298] Setting JSON to false
	I0815 01:04:29.774318 1464677 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35212,"bootTime":1723648658,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0815 01:04:29.774389 1464677 start.go:139] virtualization:  
	I0815 01:04:29.776982 1464677 out.go:177] * [ha-095774] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 01:04:29.779018 1464677 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:04:29.779116 1464677 notify.go:220] Checking for updates...
	I0815 01:04:29.783567 1464677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:04:29.785416 1464677 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 01:04:29.787282 1464677 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	I0815 01:04:29.789459 1464677 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 01:04:29.791529 1464677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:04:29.794206 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:04:29.794789 1464677 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:04:29.819408 1464677 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 01:04:29.819527 1464677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:04:29.883858 1464677 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-15 01:04:29.873671468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:04:29.883968 1464677 docker.go:307] overlay module found
	I0815 01:04:29.886534 1464677 out.go:177] * Using the docker driver based on existing profile
	I0815 01:04:29.888883 1464677 start.go:297] selected driver: docker
	I0815 01:04:29.888908 1464677 start.go:901] validating driver "docker" against &{Name:ha-095774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-095774 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kub
evirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:04:29.889062 1464677 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:04:29.889175 1464677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:04:29.939086 1464677 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-15 01:04:29.930426814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:04:29.939501 1464677 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:04:29.939529 1464677 cni.go:84] Creating CNI manager for ""
	I0815 01:04:29.939537 1464677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 01:04:29.939585 1464677 start.go:340] cluster config:
	{Name:ha-095774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-095774 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device
-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval
:1m0s}
	I0815 01:04:29.943929 1464677 out.go:177] * Starting "ha-095774" primary control-plane node in "ha-095774" cluster
	I0815 01:04:29.946450 1464677 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 01:04:29.948912 1464677 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 01:04:29.951171 1464677 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:04:29.951249 1464677 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0815 01:04:29.951263 1464677 cache.go:56] Caching tarball of preloaded images
	I0815 01:04:29.951424 1464677 preload.go:172] Found /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0815 01:04:29.951439 1464677 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:04:29.951600 1464677 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/config.json ...
	I0815 01:04:29.951251 1464677 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	W0815 01:04:29.970643 1464677 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 01:04:29.970666 1464677 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 01:04:29.970747 1464677 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 01:04:29.970769 1464677 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 01:04:29.970774 1464677 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 01:04:29.970782 1464677 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 01:04:29.970792 1464677 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 01:04:29.972127 1464677 image.go:273] response: 
	I0815 01:04:30.127782 1464677 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 01:04:30.127820 1464677 cache.go:194] Successfully downloaded all kic artifacts
	I0815 01:04:30.127867 1464677 start.go:360] acquireMachinesLock for ha-095774: {Name:mka6734aea0cb70c12411b866778f8c5eb419500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:04:30.127945 1464677 start.go:364] duration metric: took 50.527µs to acquireMachinesLock for "ha-095774"
	I0815 01:04:30.127975 1464677 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:04:30.127982 1464677 fix.go:54] fixHost starting: 
	I0815 01:04:30.128287 1464677 cli_runner.go:164] Run: docker container inspect ha-095774 --format={{.State.Status}}
	I0815 01:04:30.152513 1464677 fix.go:112] recreateIfNeeded on ha-095774: state=Stopped err=<nil>
	W0815 01:04:30.152547 1464677 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:04:30.155528 1464677 out.go:177] * Restarting existing docker container for "ha-095774" ...
	I0815 01:04:30.157654 1464677 cli_runner.go:164] Run: docker start ha-095774
	I0815 01:04:30.457471 1464677 cli_runner.go:164] Run: docker container inspect ha-095774 --format={{.State.Status}}
	I0815 01:04:30.485208 1464677 kic.go:430] container "ha-095774" state is running.
	I0815 01:04:30.486286 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774
	I0815 01:04:30.507385 1464677 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/config.json ...
	I0815 01:04:30.507624 1464677 machine.go:94] provisionDockerMachine start ...
	I0815 01:04:30.507683 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:30.527331 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:04:30.527598 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34660 <nil> <nil>}
	I0815 01:04:30.527614 1464677 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:04:30.528201 1464677 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0815 01:04:33.665935 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095774
	
	I0815 01:04:33.665998 1464677 ubuntu.go:169] provisioning hostname "ha-095774"
	I0815 01:04:33.666104 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:33.682694 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:04:33.682943 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34660 <nil> <nil>}
	I0815 01:04:33.682958 1464677 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095774 && echo "ha-095774" | sudo tee /etc/hostname
	I0815 01:04:33.830104 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095774
	
	I0815 01:04:33.830181 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:33.847140 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:04:33.847396 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34660 <nil> <nil>}
	I0815 01:04:33.847416 1464677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095774' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095774/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095774' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:04:33.978442 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:04:33.978474 1464677 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-1398913/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-1398913/.minikube}
	I0815 01:04:33.978501 1464677 ubuntu.go:177] setting up certificates
	I0815 01:04:33.978510 1464677 provision.go:84] configureAuth start
	I0815 01:04:33.978572 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774
	I0815 01:04:33.996049 1464677 provision.go:143] copyHostCerts
	I0815 01:04:33.996102 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem
	I0815 01:04:33.996175 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem, removing ...
	I0815 01:04:33.996187 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem
	I0815 01:04:33.996267 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem (1679 bytes)
	I0815 01:04:33.996363 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem
	I0815 01:04:33.996389 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem, removing ...
	I0815 01:04:33.996393 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem
	I0815 01:04:33.996424 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem (1082 bytes)
	I0815 01:04:33.996466 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem
	I0815 01:04:33.996493 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem, removing ...
	I0815 01:04:33.996498 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem
	I0815 01:04:33.996529 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem (1123 bytes)
	I0815 01:04:33.996587 1464677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem org=jenkins.ha-095774 san=[127.0.0.1 192.168.49.2 ha-095774 localhost minikube]
	I0815 01:04:34.631677 1464677 provision.go:177] copyRemoteCerts
	I0815 01:04:34.631758 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:04:34.631804 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:34.649771 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34660 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774/id_rsa Username:docker}
	I0815 01:04:34.747358 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 01:04:34.747415 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 01:04:34.771989 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 01:04:34.772053 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 01:04:34.797236 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 01:04:34.797295 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:04:34.821282 1464677 provision.go:87] duration metric: took 842.756822ms to configureAuth
	I0815 01:04:34.821361 1464677 ubuntu.go:193] setting minikube options for container-runtime
	I0815 01:04:34.821624 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:04:34.821736 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:34.837918 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:04:34.838163 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34660 <nil> <nil>}
	I0815 01:04:34.838183 1464677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:04:35.288595 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:04:35.288622 1464677 machine.go:97] duration metric: took 4.780987089s to provisionDockerMachine
	I0815 01:04:35.288634 1464677 start.go:293] postStartSetup for "ha-095774" (driver="docker")
	I0815 01:04:35.288645 1464677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:04:35.288729 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:04:35.288780 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:35.315466 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34660 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774/id_rsa Username:docker}
	I0815 01:04:35.413570 1464677 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:04:35.417277 1464677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 01:04:35.417323 1464677 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 01:04:35.417334 1464677 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 01:04:35.417345 1464677 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 01:04:35.417356 1464677 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/addons for local assets ...
	I0815 01:04:35.417439 1464677 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/files for local assets ...
	I0815 01:04:35.417538 1464677 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> 14042982.pem in /etc/ssl/certs
	I0815 01:04:35.417553 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> /etc/ssl/certs/14042982.pem
	I0815 01:04:35.417682 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:04:35.427046 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem --> /etc/ssl/certs/14042982.pem (1708 bytes)
	I0815 01:04:35.452459 1464677 start.go:296] duration metric: took 163.809ms for postStartSetup
	I0815 01:04:35.452573 1464677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:04:35.452621 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:35.469298 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34660 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774/id_rsa Username:docker}
	I0815 01:04:35.564121 1464677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 01:04:35.569066 1464677 fix.go:56] duration metric: took 5.441075916s for fixHost
	I0815 01:04:35.569098 1464677 start.go:83] releasing machines lock for "ha-095774", held for 5.441137322s
	I0815 01:04:35.569185 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774
	I0815 01:04:35.585791 1464677 ssh_runner.go:195] Run: cat /version.json
	I0815 01:04:35.585849 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:35.586160 1464677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:04:35.586244 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:35.604341 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34660 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774/id_rsa Username:docker}
	I0815 01:04:35.611750 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34660 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774/id_rsa Username:docker}
	I0815 01:04:35.693872 1464677 ssh_runner.go:195] Run: systemctl --version
	I0815 01:04:35.826133 1464677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:04:35.970099 1464677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 01:04:35.974741 1464677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:04:35.983557 1464677 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 01:04:35.983634 1464677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:04:35.992757 1464677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 01:04:35.992828 1464677 start.go:495] detecting cgroup driver to use...
	I0815 01:04:35.992865 1464677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 01:04:35.992920 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:04:36.012806 1464677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:04:36.027056 1464677 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:04:36.027172 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:04:36.042691 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:04:36.056381 1464677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:04:36.157599 1464677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:04:36.249317 1464677 docker.go:233] disabling docker service ...
	I0815 01:04:36.249434 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:04:36.262050 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:04:36.274022 1464677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:04:36.365773 1464677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:04:36.457343 1464677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:04:36.469223 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:04:36.485412 1464677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:04:36.485521 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:36.496392 1464677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:04:36.496495 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:36.507113 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:36.518100 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:36.529123 1464677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:04:36.539433 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:36.549679 1464677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:36.560232 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:36.570958 1464677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:04:36.579708 1464677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:04:36.588481 1464677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:04:36.682764 1464677 ssh_runner.go:195] Run: sudo systemctl restart crio
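	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf configured for the cgroupfs driver, the pause:3.10 image, a "pod" conmon cgroup, and unprivileged low ports. A rough way to confirm the result on the node (illustrative only; the expected values are read straight from the commands above, not from the actual file):

	    # inspect the keys the sed commands above touched; expected values shown as comments
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",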
	I0815 01:04:36.804340 1464677 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:04:36.804454 1464677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:04:36.808176 1464677 start.go:563] Will wait 60s for crictl version
	I0815 01:04:36.808247 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:04:36.812478 1464677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:04:36.857935 1464677 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 01:04:36.858017 1464677 ssh_runner.go:195] Run: crio --version
	I0815 01:04:36.895886 1464677 ssh_runner.go:195] Run: crio --version
	I0815 01:04:36.945854 1464677 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 01:04:36.947671 1464677 cli_runner.go:164] Run: docker network inspect ha-095774 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 01:04:36.963482 1464677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 01:04:36.967269 1464677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:04:36.978152 1464677 kubeadm.go:883] updating cluster {Name:ha-095774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-095774 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false l
ogviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:04:36.978314 1464677 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:04:36.978383 1464677 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:04:37.032356 1464677 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:04:37.032387 1464677 crio.go:433] Images already preloaded, skipping extraction
	I0815 01:04:37.032452 1464677 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:04:37.075912 1464677 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:04:37.075934 1464677 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:04:37.075952 1464677 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0815 01:04:37.076060 1464677 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-095774 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-095774 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
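	This unit fragment becomes the kubelet drop-in (10-kubeadm.conf, copied over ssh later in this run). The empty ExecStart= line is intentional: systemd requires clearing the base unit's ExecStart before a drop-in can substitute its own command line. An illustrative way to view the merged result on the node:

	    # print the kubelet unit together with all drop-ins, including 10-kubeadm.conf
	    systemctl cat kubelet
	    # pick up drop-in changes, as the log does further down
	    sudo systemctl daemon-reload && sudo systemctl start kubelet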
	I0815 01:04:37.076157 1464677 ssh_runner.go:195] Run: crio config
	I0815 01:04:37.127252 1464677 cni.go:84] Creating CNI manager for ""
	I0815 01:04:37.127273 1464677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 01:04:37.127284 1464677 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:04:37.127309 1464677 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-095774 NodeName:ha-095774 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:04:37.127503 1464677 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-095774"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
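	The config above is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line further down). As an illustrative check outside of minikube, kubeadm can render everything this file would produce without modifying the running cluster:

	    # dry-run the generated config; prints the phases and manifests it would apply
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run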
	
	I0815 01:04:37.127524 1464677 kube-vip.go:115] generating kube-vip config ...
	I0815 01:04:37.127578 1464677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0815 01:04:37.140613 1464677 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 01:04:37.140739 1464677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
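	With cp_enable, lb_enable, and vip_leaderelection all set, kube-vip elects a leader through the plndr-cp-lock lease and binds 192.168.49.254 to eth0 on that node. Two illustrative checks (names taken from the manifest above; not part of the test run):

	    # which control-plane node currently holds the VIP lease
	    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
	    # on the holder, the VIP should appear on eth0
	    ip addr show dev eth0 | grep 192.168.49.254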
	I0815 01:04:37.140801 1464677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:04:37.150107 1464677 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:04:37.150177 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 01:04:37.159334 1464677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0815 01:04:37.177845 1464677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:04:37.196246 1464677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0815 01:04:37.215634 1464677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 01:04:37.234167 1464677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0815 01:04:37.237847 1464677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:04:37.248951 1464677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:04:37.343053 1464677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:04:37.357275 1464677 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774 for IP: 192.168.49.2
	I0815 01:04:37.357315 1464677 certs.go:194] generating shared ca certs ...
	I0815 01:04:37.357332 1464677 certs.go:226] acquiring lock for ca certs: {Name:mk7828e60149aaf109ce40cae2b300a118fa9ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:04:37.357477 1464677 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key
	I0815 01:04:37.357526 1464677 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key
	I0815 01:04:37.357540 1464677 certs.go:256] generating profile certs ...
	I0815 01:04:37.357617 1464677 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.key
	I0815 01:04:37.357648 1464677 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key.62576017
	I0815 01:04:37.357665 1464677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt.62576017 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0815 01:04:37.769501 1464677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt.62576017 ...
	I0815 01:04:37.769535 1464677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt.62576017: {Name:mkc33297aef54d0446f34eb4c93f8014d96c0e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:04:37.769736 1464677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key.62576017 ...
	I0815 01:04:37.769759 1464677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key.62576017: {Name:mk783f57c6a29b997a9844e82ae7a34e4ecdaaa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:04:37.769839 1464677 certs.go:381] copying /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt.62576017 -> /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt
	I0815 01:04:37.769983 1464677 certs.go:385] copying /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key.62576017 -> /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key
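	The SAN list used for this cert (the IPs shown above, including the 192.168.49.254 VIP) is what lets clients validate the apiserver certificate regardless of which address they dial. An illustrative way to confirm the SANs once the cert lands at its remote path (path taken from the scp lines below):

	    # list the Subject Alternative Names embedded in the apiserver certificate
	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	      | grep -A1 'Subject Alternative Name'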
	I0815 01:04:37.770117 1464677 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.key
	I0815 01:04:37.770134 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 01:04:37.770149 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 01:04:37.770169 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 01:04:37.770182 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 01:04:37.770197 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 01:04:37.770211 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 01:04:37.770230 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 01:04:37.770248 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 01:04:37.770303 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem (1338 bytes)
	W0815 01:04:37.770337 1464677 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298_empty.pem, impossibly tiny 0 bytes
	I0815 01:04:37.770349 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:04:37.770373 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem (1082 bytes)
	I0815 01:04:37.770419 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:04:37.770449 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem (1679 bytes)
	I0815 01:04:37.770500 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem (1708 bytes)
	I0815 01:04:37.770533 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:04:37.770549 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem -> /usr/share/ca-certificates/1404298.pem
	I0815 01:04:37.770560 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> /usr/share/ca-certificates/14042982.pem
	I0815 01:04:37.771185 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:04:37.801404 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:04:37.826642 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:04:37.852101 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 01:04:37.876192 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:04:37.902012 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 01:04:37.929171 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:04:37.953833 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:04:37.978513 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:04:38.004764 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem --> /usr/share/ca-certificates/1404298.pem (1338 bytes)
	I0815 01:04:38.037361 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem --> /usr/share/ca-certificates/14042982.pem (1708 bytes)
	I0815 01:04:38.067374 1464677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:04:38.088737 1464677 ssh_runner.go:195] Run: openssl version
	I0815 01:04:38.094737 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:04:38.104804 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:04:38.108652 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:04:38.108721 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:04:38.115919 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:04:38.125645 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1404298.pem && ln -fs /usr/share/ca-certificates/1404298.pem /etc/ssl/certs/1404298.pem"
	I0815 01:04:38.135761 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1404298.pem
	I0815 01:04:38.139552 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:50 /usr/share/ca-certificates/1404298.pem
	I0815 01:04:38.139638 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1404298.pem
	I0815 01:04:38.147062 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1404298.pem /etc/ssl/certs/51391683.0"
	I0815 01:04:38.156474 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14042982.pem && ln -fs /usr/share/ca-certificates/14042982.pem /etc/ssl/certs/14042982.pem"
	I0815 01:04:38.166212 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14042982.pem
	I0815 01:04:38.170051 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:50 /usr/share/ca-certificates/14042982.pem
	I0815 01:04:38.170118 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14042982.pem
	I0815 01:04:38.177177 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14042982.pem /etc/ssl/certs/3ec20f2e.0"
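	The link names created above are OpenSSL subject hashes: consumers of the system trust store look a CA up as <hash>.0 under /etc/ssl/certs, so each PEM is hashed first and the symlink named after the result (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two jenkins certs). The mapping can be reproduced directly (illustrative check):

	    # prints b5213941, the hash behind the /etc/ssl/certs/b5213941.0 symlink created above
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    ls -l /etc/ssl/certs/b5213941.0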
	I0815 01:04:38.186498 1464677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:04:38.190069 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:04:38.196738 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:04:38.204494 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:04:38.211547 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:04:38.218333 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:04:38.225416 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:04:38.233046 1464677 kubeadm.go:392] StartCluster: {Name:ha-095774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-095774 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logv
iewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:04:38.233187 1464677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:04:38.233258 1464677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:04:38.270463 1464677 cri.go:89] found id: ""
	I0815 01:04:38.270531 1464677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:04:38.279695 1464677 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:04:38.279717 1464677 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:04:38.279797 1464677 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:04:38.288242 1464677 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:04:38.288736 1464677 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-095774" does not appear in /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 01:04:38.288860 1464677 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-1398913/kubeconfig needs updating (will repair): [kubeconfig missing "ha-095774" cluster setting kubeconfig missing "ha-095774" context setting]
	I0815 01:04:38.289128 1464677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/kubeconfig: {Name:mkbc924cd270a9bf83bc63fe6d76f87df76fc38b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:04:38.289546 1464677 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 01:04:38.289800 1464677 kapi.go:59] client config for ha-095774: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.key", CAFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cadb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
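	At this point the repaired kubeconfig has an ha-095774 cluster and context pointing at https://192.168.49.2:8443 with the profile's client certificate. Two illustrative checks that the repair produced a usable config (run on the host, not part of the test):

	    # the context should now be listed in the integration kubeconfig
	    kubectl config get-contexts ha-095774 \
	      --kubeconfig /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	    # the client certificate should chain to the minikube CA referenced in the rest.Config above
	    openssl verify -CAfile /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt \
	      /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.crt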
	I0815 01:04:38.290585 1464677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:04:38.290602 1464677 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 01:04:38.299123 1464677 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0815 01:04:38.299145 1464677 kubeadm.go:597] duration metric: took 19.42184ms to restartPrimaryControlPlane
	I0815 01:04:38.299154 1464677 kubeadm.go:394] duration metric: took 66.1178ms to StartCluster
	I0815 01:04:38.299170 1464677 settings.go:142] acquiring lock: {Name:mk702991e0e1159812b2000a3112e7b24af8d662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:04:38.299257 1464677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 01:04:38.299882 1464677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-1398913/kubeconfig: {Name:mkbc924cd270a9bf83bc63fe6d76f87df76fc38b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:04:38.300078 1464677 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:04:38.300097 1464677 start.go:241] waiting for startup goroutines ...
	I0815 01:04:38.300114 1464677 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:04:38.301015 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:04:38.304302 1464677 out.go:177] * Enabled addons: 
	I0815 01:04:38.306373 1464677 addons.go:510] duration metric: took 6.255709ms for enable addons: enabled=[]
	I0815 01:04:38.306426 1464677 start.go:246] waiting for cluster config update ...
	I0815 01:04:38.306436 1464677 start.go:255] writing updated cluster config ...
	I0815 01:04:38.309111 1464677 out.go:177] 
	I0815 01:04:38.311405 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:04:38.311518 1464677 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/config.json ...
	I0815 01:04:38.313890 1464677 out.go:177] * Starting "ha-095774-m02" control-plane node in "ha-095774" cluster
	I0815 01:04:38.315918 1464677 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 01:04:38.317982 1464677 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 01:04:38.320349 1464677 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:04:38.320395 1464677 cache.go:56] Caching tarball of preloaded images
	I0815 01:04:38.320421 1464677 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 01:04:38.320500 1464677 preload.go:172] Found /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0815 01:04:38.320512 1464677 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:04:38.320654 1464677 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/config.json ...
	W0815 01:04:38.338863 1464677 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 01:04:38.338885 1464677 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 01:04:38.338974 1464677 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 01:04:38.338996 1464677 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 01:04:38.339006 1464677 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 01:04:38.339015 1464677 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 01:04:38.339023 1464677 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 01:04:38.340189 1464677 image.go:273] response: 
	I0815 01:04:38.473416 1464677 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 01:04:38.473459 1464677 cache.go:194] Successfully downloaded all kic artifacts
	I0815 01:04:38.473491 1464677 start.go:360] acquireMachinesLock for ha-095774-m02: {Name:mka8a0cf28a1d7471d046130a3d032e0cc60f2ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:04:38.473556 1464677 start.go:364] duration metric: took 44.857µs to acquireMachinesLock for "ha-095774-m02"
	I0815 01:04:38.473584 1464677 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:04:38.473596 1464677 fix.go:54] fixHost starting: m02
	I0815 01:04:38.473887 1464677 cli_runner.go:164] Run: docker container inspect ha-095774-m02 --format={{.State.Status}}
	I0815 01:04:38.489788 1464677 fix.go:112] recreateIfNeeded on ha-095774-m02: state=Stopped err=<nil>
	W0815 01:04:38.489812 1464677 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:04:38.492595 1464677 out.go:177] * Restarting existing docker container for "ha-095774-m02" ...
	I0815 01:04:38.494768 1464677 cli_runner.go:164] Run: docker start ha-095774-m02
	I0815 01:04:38.785220 1464677 cli_runner.go:164] Run: docker container inspect ha-095774-m02 --format={{.State.Status}}
	I0815 01:04:38.803470 1464677 kic.go:430] container "ha-095774-m02" state is running.
	I0815 01:04:38.803832 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774-m02
	I0815 01:04:38.830728 1464677 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/config.json ...
	I0815 01:04:38.830967 1464677 machine.go:94] provisionDockerMachine start ...
	I0815 01:04:38.831056 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:38.851925 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:04:38.852168 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34665 <nil> <nil>}
	I0815 01:04:38.852183 1464677 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:04:38.853213 1464677 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0815 01:04:42.049238 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095774-m02
	
	I0815 01:04:42.049313 1464677 ubuntu.go:169] provisioning hostname "ha-095774-m02"
	I0815 01:04:42.049429 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:42.080326 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:04:42.080575 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34665 <nil> <nil>}
	I0815 01:04:42.080587 1464677 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095774-m02 && echo "ha-095774-m02" | sudo tee /etc/hostname
	I0815 01:04:42.328032 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095774-m02
	
	I0815 01:04:42.328189 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:42.365855 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:04:42.366117 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34665 <nil> <nil>}
	I0815 01:04:42.366141 1464677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095774-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095774-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095774-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:04:42.540013 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:04:42.540092 1464677 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-1398913/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-1398913/.minikube}
	I0815 01:04:42.540125 1464677 ubuntu.go:177] setting up certificates
	I0815 01:04:42.540160 1464677 provision.go:84] configureAuth start
	I0815 01:04:42.540252 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774-m02
	I0815 01:04:42.582623 1464677 provision.go:143] copyHostCerts
	I0815 01:04:42.582665 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem
	I0815 01:04:42.582699 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem, removing ...
	I0815 01:04:42.582705 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem
	I0815 01:04:42.582783 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem (1082 bytes)
	I0815 01:04:42.582860 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem
	I0815 01:04:42.582876 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem, removing ...
	I0815 01:04:42.582881 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem
	I0815 01:04:42.582904 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem (1123 bytes)
	I0815 01:04:42.582962 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem
	I0815 01:04:42.582978 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem, removing ...
	I0815 01:04:42.582982 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem
	I0815 01:04:42.583011 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem (1679 bytes)
	I0815 01:04:42.583100 1464677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem org=jenkins.ha-095774-m02 san=[127.0.0.1 192.168.49.3 ha-095774-m02 localhost minikube]
	I0815 01:04:43.122036 1464677 provision.go:177] copyRemoteCerts
	I0815 01:04:43.122159 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:04:43.122221 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:43.141116 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34665 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m02/id_rsa Username:docker}
	I0815 01:04:43.264730 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 01:04:43.264805 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 01:04:43.324410 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 01:04:43.324469 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 01:04:43.353223 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 01:04:43.353332 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:04:43.382577 1464677 provision.go:87] duration metric: took 842.385613ms to configureAuth
	I0815 01:04:43.382651 1464677 ubuntu.go:193] setting minikube options for container-runtime
	I0815 01:04:43.382931 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:04:43.383092 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:43.408069 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:04:43.408314 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34665 <nil> <nil>}
	I0815 01:04:43.408328 1464677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:04:43.826686 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:04:43.826709 1464677 machine.go:97] duration metric: took 4.995731787s to provisionDockerMachine
	I0815 01:04:43.826720 1464677 start.go:293] postStartSetup for "ha-095774-m02" (driver="docker")
	I0815 01:04:43.826731 1464677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:04:43.826841 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:04:43.826881 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:43.858916 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34665 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m02/id_rsa Username:docker}
	I0815 01:04:44.011082 1464677 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:04:44.021559 1464677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 01:04:44.021595 1464677 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 01:04:44.021606 1464677 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 01:04:44.021613 1464677 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 01:04:44.021624 1464677 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/addons for local assets ...
	I0815 01:04:44.021677 1464677 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/files for local assets ...
	I0815 01:04:44.021755 1464677 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> 14042982.pem in /etc/ssl/certs
	I0815 01:04:44.021765 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> /etc/ssl/certs/14042982.pem
	I0815 01:04:44.021865 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:04:44.132865 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem --> /etc/ssl/certs/14042982.pem (1708 bytes)
	I0815 01:04:44.279658 1464677 start.go:296] duration metric: took 452.916411ms for postStartSetup
	I0815 01:04:44.279801 1464677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:04:44.279869 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:44.306645 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34665 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m02/id_rsa Username:docker}
	I0815 01:04:44.466646 1464677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 01:04:44.493782 1464677 fix.go:56] duration metric: took 6.020177662s for fixHost
	I0815 01:04:44.493809 1464677 start.go:83] releasing machines lock for "ha-095774-m02", held for 6.020239003s
	I0815 01:04:44.493877 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774-m02
	I0815 01:04:44.546146 1464677 out.go:177] * Found network options:
	I0815 01:04:44.548872 1464677 out.go:177]   - NO_PROXY=192.168.49.2
	W0815 01:04:44.551353 1464677 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 01:04:44.551396 1464677 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 01:04:44.551461 1464677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:04:44.551518 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:44.551754 1464677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:04:44.551804 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m02
	I0815 01:04:44.597255 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34665 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m02/id_rsa Username:docker}
	I0815 01:04:44.598897 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34665 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m02/id_rsa Username:docker}
	I0815 01:04:45.076934 1464677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 01:04:45.138376 1464677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:04:45.176538 1464677 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 01:04:45.176681 1464677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:04:45.235272 1464677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 01:04:45.235302 1464677 start.go:495] detecting cgroup driver to use...
	I0815 01:04:45.235374 1464677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 01:04:45.235456 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:04:45.286183 1464677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:04:45.327226 1464677 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:04:45.327327 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:04:45.370468 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:04:45.402314 1464677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:04:45.644495 1464677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:04:45.883686 1464677 docker.go:233] disabling docker service ...
	I0815 01:04:45.883788 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:04:45.926669 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:04:45.978061 1464677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:04:46.307788 1464677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:04:46.602886 1464677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:04:46.644850 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:04:46.718983 1464677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:04:46.719089 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:46.767280 1464677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:04:46.767391 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:46.834999 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:46.865199 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:46.882792 1464677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:04:46.913797 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:46.938774 1464677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:46.953317 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:04:46.973424 1464677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:04:46.988811 1464677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:04:47.029877 1464677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:04:47.262054 1464677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:04:48.684577 1464677 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.422473417s)
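
The run above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon_cgroup, unprivileged-port sysctl) and then restarts the service. A minimal Go sketch of the two central edits, assuming the logged path and substitution patterns rather than minikube's actual implementation, is:

package main

import (
	"log"
	"os"
	"regexp"
)

// Sketch of the two sed edits logged above: point CRI-O at the
// registry.k8s.io/pause:3.10 pause image and switch it to the cgroupfs
// cgroup manager. Path and patterns mirror the logged commands and are
// assumptions about the drop-in layout, not minikube's own code.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}

As in the log, an edit like this only takes effect after the subsequent systemctl daemon-reload and systemctl restart crio.
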
	I0815 01:04:48.684618 1464677 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:04:48.684677 1464677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:04:48.706782 1464677 start.go:563] Will wait 60s for crictl version
	I0815 01:04:48.706858 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:04:48.710310 1464677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:04:48.800953 1464677 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 01:04:48.801040 1464677 ssh_runner.go:195] Run: crio --version
	I0815 01:04:48.880443 1464677 ssh_runner.go:195] Run: crio --version
	I0815 01:04:48.962574 1464677 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 01:04:48.964629 1464677 out.go:177]   - env NO_PROXY=192.168.49.2
	I0815 01:04:48.966321 1464677 cli_runner.go:164] Run: docker network inspect ha-095774 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 01:04:48.989540 1464677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 01:04:48.993383 1464677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
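
The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and re-appends the current one. A rough Go equivalent (the IP and hostname are taken from the log; writing /etc/hosts directly is a simplification, since minikube stages the file in /tmp and copies it with sudo) could look like:

package main

import (
	"log"
	"os"
	"strings"
)

// Sketch: remove any line ending in "\thost.minikube.internal" from
// /etc/hosts and append a fresh entry, mirroring the logged shell pipeline.
func main() {
	const entry = "192.168.49.1\thost.minikube.internal"

	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale entry, like `grep -v`
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
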
	I0815 01:04:49.010312 1464677 mustload.go:65] Loading cluster: ha-095774
	I0815 01:04:49.010583 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:04:49.010869 1464677 cli_runner.go:164] Run: docker container inspect ha-095774 --format={{.State.Status}}
	I0815 01:04:49.042421 1464677 host.go:66] Checking if "ha-095774" exists ...
	I0815 01:04:49.042724 1464677 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774 for IP: 192.168.49.3
	I0815 01:04:49.042739 1464677 certs.go:194] generating shared ca certs ...
	I0815 01:04:49.042754 1464677 certs.go:226] acquiring lock for ca certs: {Name:mk7828e60149aaf109ce40cae2b300a118fa9ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:04:49.042882 1464677 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key
	I0815 01:04:49.042930 1464677 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key
	I0815 01:04:49.042945 1464677 certs.go:256] generating profile certs ...
	I0815 01:04:49.043041 1464677 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.key
	I0815 01:04:49.043113 1464677 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key.237af608
	I0815 01:04:49.043158 1464677 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.key
	I0815 01:04:49.043172 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 01:04:49.043187 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 01:04:49.043203 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 01:04:49.043213 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 01:04:49.043226 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 01:04:49.043243 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 01:04:49.043261 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 01:04:49.043273 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 01:04:49.043334 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem (1338 bytes)
	W0815 01:04:49.043369 1464677 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298_empty.pem, impossibly tiny 0 bytes
	I0815 01:04:49.043382 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:04:49.043409 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem (1082 bytes)
	I0815 01:04:49.043437 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:04:49.043464 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem (1679 bytes)
	I0815 01:04:49.043513 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem (1708 bytes)
	I0815 01:04:49.043545 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> /usr/share/ca-certificates/14042982.pem
	I0815 01:04:49.043563 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:04:49.043574 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem -> /usr/share/ca-certificates/1404298.pem
	I0815 01:04:49.043638 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 01:04:49.075117 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34660 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774/id_rsa Username:docker}
	I0815 01:04:49.186707 1464677 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0815 01:04:49.200718 1464677 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 01:04:49.230975 1464677 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0815 01:04:49.241617 1464677 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0815 01:04:49.277265 1464677 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 01:04:49.290116 1464677 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 01:04:49.312979 1464677 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0815 01:04:49.328063 1464677 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0815 01:04:49.361077 1464677 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0815 01:04:49.377155 1464677 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 01:04:49.406309 1464677 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0815 01:04:49.424002 1464677 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0815 01:04:49.457618 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:04:49.494951 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:04:49.539743 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:04:49.587925 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 01:04:49.631975 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:04:49.675435 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 01:04:49.725639 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:04:49.777131 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:04:49.819514 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem --> /usr/share/ca-certificates/14042982.pem (1708 bytes)
	I0815 01:04:49.848912 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:04:49.876077 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem --> /usr/share/ca-certificates/1404298.pem (1338 bytes)
	I0815 01:04:49.926293 1464677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 01:04:49.982150 1464677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0815 01:04:50.037629 1464677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 01:04:50.078481 1464677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0815 01:04:50.104795 1464677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 01:04:50.128852 1464677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0815 01:04:50.150002 1464677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 01:04:50.170991 1464677 ssh_runner.go:195] Run: openssl version
	I0815 01:04:50.177379 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14042982.pem && ln -fs /usr/share/ca-certificates/14042982.pem /etc/ssl/certs/14042982.pem"
	I0815 01:04:50.187495 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14042982.pem
	I0815 01:04:50.191752 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:50 /usr/share/ca-certificates/14042982.pem
	I0815 01:04:50.191825 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14042982.pem
	I0815 01:04:50.199922 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14042982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:04:50.209453 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:04:50.219348 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:04:50.227761 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:04:50.227830 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:04:50.235711 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:04:50.245324 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1404298.pem && ln -fs /usr/share/ca-certificates/1404298.pem /etc/ssl/certs/1404298.pem"
	I0815 01:04:50.255134 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1404298.pem
	I0815 01:04:50.259621 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:50 /usr/share/ca-certificates/1404298.pem
	I0815 01:04:50.259689 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1404298.pem
	I0815 01:04:50.267554 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1404298.pem /etc/ssl/certs/51391683.0"
	I0815 01:04:50.276957 1464677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:04:50.281434 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:04:50.289244 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:04:50.297760 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:04:50.305236 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:04:50.313118 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:04:50.320348 1464677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
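
Each openssl x509 -noout -checkend 86400 call above asks whether the certificate expires within the next 24 hours. The same check written against the Go standard library looks roughly like the sketch below; the certificate path is illustrative only.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// Sketch of `openssl x509 -noout -checkend 86400`: does the certificate
// expire within the next 24 hours?
func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1) // openssl -checkend exits non-zero in this case
	}
	fmt.Println("certificate is valid for at least another 24h")
}
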
	I0815 01:04:50.331813 1464677 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.0 crio true true} ...
	I0815 01:04:50.331916 1464677 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-095774-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-095774 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:04:50.331948 1464677 kube-vip.go:115] generating kube-vip config ...
	I0815 01:04:50.332001 1464677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0815 01:04:50.347867 1464677 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 01:04:50.347933 1464677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
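
The manifest above is rendered by minikube from a template (kube-vip.go:137), with the VIP address and API server port filled in per cluster. A toy sketch of that kind of rendering with text/template follows; the template fragment is made up and heavily trimmed, not minikube's real one.

package main

import (
	"log"
	"os"
	"text/template"
)

// Toy sketch of rendering part of a kube-vip static-pod manifest from a
// template. Values come from the log above; the template is hypothetical.
const vipTmpl = `    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	err := t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.49.254", Port: 8443})
	if err != nil {
		log.Fatal(err)
	}
}

The rendered manifest is then copied to /etc/kubernetes/manifests/kube-vip.yaml, as the scp line further down shows.
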
	I0815 01:04:50.347999 1464677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:04:50.357662 1464677 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:04:50.357736 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 01:04:50.367543 1464677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 01:04:50.387571 1464677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:04:50.409740 1464677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 01:04:50.430185 1464677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0815 01:04:50.435034 1464677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:04:50.448948 1464677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:04:50.601868 1464677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:04:50.617503 1464677 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:04:50.617866 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:04:50.621473 1464677 out.go:177] * Verifying Kubernetes components...
	I0815 01:04:50.623821 1464677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:04:50.778585 1464677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:04:50.805207 1464677 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 01:04:50.805571 1464677 kapi.go:59] client config for ha-095774: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.key", CAFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cadb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 01:04:50.805659 1464677 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0815 01:04:50.805940 1464677 node_ready.go:35] waiting up to 6m0s for node "ha-095774-m02" to be "Ready" ...
	I0815 01:04:50.806077 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:04:50.806103 1464677 round_trippers.go:469] Request Headers:
	I0815 01:04:50.806128 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:04:50.806161 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:01.429550 1464677 round_trippers.go:574] Response Status: 500 Internal Server Error in 10623 milliseconds
	I0815 01:05:01.429780 1464677 node_ready.go:53] error getting node "ha-095774-m02": etcdserver: request timed out
	I0815 01:05:01.429838 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:01.429844 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:01.429852 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:01.429856 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:09.069359 1464677 round_trippers.go:574] Response Status: 500 Internal Server Error in 7639 milliseconds
	I0815 01:05:09.069703 1464677 node_ready.go:53] error getting node "ha-095774-m02": etcdserver: leader changed
	I0815 01:05:09.069778 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:09.069789 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:09.069798 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:09.069808 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:09.108108 1464677 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0815 01:05:09.109855 1464677 node_ready.go:49] node "ha-095774-m02" has status "Ready":"True"
	I0815 01:05:09.109886 1464677 node_ready.go:38] duration metric: took 18.30390452s for node "ha-095774-m02" to be "Ready" ...
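
node_ready.go keeps issuing GET /api/v1/nodes/ha-095774-m02 until the node reports a Ready condition of True (here after etcd settles, about 18s). A condensed sketch of that check with client-go is below; the kubeconfig path is illustrative, and overriding the stale VIP host with the primary control plane endpoint mirrors the kubeadm.go:483 message above but is an assumption about how to reproduce it.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Sketch: is node "ha-095774-m02" Ready?
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cfg.Host = "https://192.168.49.2:8443" // bypass the stale VIP endpoint, as the log does

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-095774-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready condition: %s\n", c.Status)
		}
	}
}
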
	I0815 01:05:09.109898 1464677 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:05:09.109941 1464677 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 01:05:09.109957 1464677 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 01:05:09.110017 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 01:05:09.110027 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:09.110036 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:09.110044 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:09.115690 1464677 round_trippers.go:574] Response Status: 429 Too Many Requests in 5 milliseconds
	I0815 01:05:10.116736 1464677 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 01:05:10.116803 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 01:05:10.116812 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:10.116823 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:10.116830 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:10.180040 1464677 round_trippers.go:574] Response Status: 429 Too Many Requests in 63 milliseconds
	I0815 01:05:11.182484 1464677 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 01:05:11.182562 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 01:05:11.182579 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.182595 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.182604 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.192773 1464677 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
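
The two 429 Too Many Requests responses above are retried after honoring the server's Retry-After header (with_retry.go), and the third attempt succeeds. A standalone sketch of that pattern with net/http follows; the URL, attempt limit, and lack of TLS/auth setup are arbitrary simplifications for illustration.

package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
	"time"
)

// Sketch: GET with a small retry loop that honors Retry-After on 429,
// similar in spirit to client-go's retry behavior.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		resp.Body.Close()
		wait := time.Second
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil {
				wait = time.Duration(secs) * time.Second
			}
		}
		log.Printf("got 429 on attempt %d, retrying after %s", i+1, wait)
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("still throttled after %d attempts", attempts)
}

func main() {
	resp, err := getWithRetry("https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods", 3)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
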
	I0815 01:05:11.219076 1464677 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.219202 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:05:11.219209 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.219226 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.219233 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.223975 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:05:11.225589 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:11.225617 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.225627 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.225631 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.229212 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:05:11.230461 1464677 pod_ready.go:92] pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:11.230547 1464677 pod_ready.go:81] duration metric: took 11.433457ms for pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.230588 1464677 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-tl9kf" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.230788 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-tl9kf
	I0815 01:05:11.230841 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.230871 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.230924 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.237013 1464677 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 01:05:11.238827 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:11.238909 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.238949 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.238986 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.245677 1464677 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 01:05:11.247434 1464677 pod_ready.go:92] pod "coredns-6f6b679f8f-tl9kf" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:11.247560 1464677 pod_ready.go:81] duration metric: took 16.876431ms for pod "coredns-6f6b679f8f-tl9kf" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.247606 1464677 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.247780 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095774
	I0815 01:05:11.247829 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.247868 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.247918 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.253038 1464677 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 01:05:11.254742 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:11.254828 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.254857 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.254907 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.261294 1464677 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 01:05:11.262808 1464677 pod_ready.go:92] pod "etcd-ha-095774" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:11.262882 1464677 pod_ready.go:81] duration metric: took 15.207508ms for pod "etcd-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.262920 1464677 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.263044 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095774-m02
	I0815 01:05:11.263076 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.263135 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.263166 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.267867 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:05:11.269526 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:11.269591 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.269624 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.269673 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.282449 1464677 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0815 01:05:11.283822 1464677 pod_ready.go:92] pod "etcd-ha-095774-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:11.283894 1464677 pod_ready.go:81] duration metric: took 20.926205ms for pod "etcd-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.283920 1464677 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095774-m03" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.284060 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095774-m03
	I0815 01:05:11.284099 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.284132 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.284150 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.287108 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:11.383444 1464677 request.go:632] Waited for 95.224448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:11.383551 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:11.383572 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.383619 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.383640 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.387226 1464677 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0815 01:05:11.387654 1464677 pod_ready.go:97] node "ha-095774-m03" hosting pod "etcd-ha-095774-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:11.387698 1464677 pod_ready.go:81] duration metric: took 103.733978ms for pod "etcd-ha-095774-m03" in "kube-system" namespace to be "Ready" ...
	E0815 01:05:11.387752 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774-m03" hosting pod "etcd-ha-095774-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:11.387811 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.583247 1464677 request.go:632] Waited for 195.324771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774
	I0815 01:05:11.583310 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774
	I0815 01:05:11.583321 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.583331 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.583337 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.586432 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:05:11.782545 1464677 request.go:632] Waited for 195.290868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:11.782610 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:11.782617 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.782625 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.782634 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.788366 1464677 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 01:05:11.789091 1464677 pod_ready.go:92] pod "kube-apiserver-ha-095774" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:11.789113 1464677 pod_ready.go:81] duration metric: took 401.270051ms for pod "kube-apiserver-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.789125 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:11.983218 1464677 request.go:632] Waited for 193.989876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774-m02
	I0815 01:05:11.983288 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774-m02
	I0815 01:05:11.983298 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:11.983307 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:11.983315 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:11.986435 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:05:12.182552 1464677 request.go:632] Waited for 195.316452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:12.182642 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:12.182655 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:12.182665 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:12.182672 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:12.185545 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:12.186360 1464677 pod_ready.go:92] pod "kube-apiserver-ha-095774-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:12.186382 1464677 pod_ready.go:81] duration metric: took 397.248978ms for pod "kube-apiserver-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:12.186415 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095774-m03" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:12.383217 1464677 request.go:632] Waited for 196.720156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774-m03
	I0815 01:05:12.383327 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774-m03
	I0815 01:05:12.383398 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:12.383410 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:12.383417 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:12.386320 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:12.583547 1464677 request.go:632] Waited for 196.220214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:12.583606 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:12.583619 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:12.583629 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:12.583638 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:12.586971 1464677 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0815 01:05:12.587174 1464677 pod_ready.go:97] node "ha-095774-m03" hosting pod "kube-apiserver-ha-095774-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:12.587205 1464677 pod_ready.go:81] duration metric: took 400.76412ms for pod "kube-apiserver-ha-095774-m03" in "kube-system" namespace to be "Ready" ...
	E0815 01:05:12.587217 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774-m03" hosting pod "kube-apiserver-ha-095774-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:12.587228 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:12.783535 1464677 request.go:632] Waited for 196.23047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774
	I0815 01:05:12.783600 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774
	I0815 01:05:12.783608 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:12.783628 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:12.783640 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:12.786595 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:12.982873 1464677 request.go:632] Waited for 195.386203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:12.982941 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:12.982951 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:12.982960 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:12.982968 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:12.985812 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:12.986330 1464677 pod_ready.go:92] pod "kube-controller-manager-ha-095774" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:12.986353 1464677 pod_ready.go:81] duration metric: took 399.116391ms for pod "kube-controller-manager-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:12.986365 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:13.183313 1464677 request.go:632] Waited for 196.84324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774-m02
	I0815 01:05:13.183397 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774-m02
	I0815 01:05:13.183409 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:13.183419 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:13.183426 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:13.186445 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:13.382596 1464677 request.go:632] Waited for 195.272178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:13.382700 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:13.382721 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:13.382749 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:13.382754 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:13.385633 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:13.386286 1464677 pod_ready.go:92] pod "kube-controller-manager-ha-095774-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:13.386306 1464677 pod_ready.go:81] duration metric: took 399.931318ms for pod "kube-controller-manager-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:13.386319 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095774-m03" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:13.583301 1464677 request.go:632] Waited for 196.916544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774-m03
	I0815 01:05:13.583364 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774-m03
	I0815 01:05:13.583375 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:13.583386 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:13.583395 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:13.587714 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:05:13.782571 1464677 request.go:632] Waited for 194.167943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:13.782656 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:13.782667 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:13.782677 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:13.782690 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:13.785318 1464677 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0815 01:05:13.785620 1464677 pod_ready.go:97] node "ha-095774-m03" hosting pod "kube-controller-manager-ha-095774-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:13.785644 1464677 pod_ready.go:81] duration metric: took 399.318196ms for pod "kube-controller-manager-ha-095774-m03" in "kube-system" namespace to be "Ready" ...
	E0815 01:05:13.785656 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774-m03" hosting pod "kube-controller-manager-ha-095774-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:13.785667 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7nfbl" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:13.982892 1464677 request.go:632] Waited for 197.150478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7nfbl
	I0815 01:05:13.982956 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7nfbl
	I0815 01:05:13.982963 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:13.982972 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:13.982984 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:13.985804 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:14.183198 1464677 request.go:632] Waited for 196.310988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:14.183256 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:14.183272 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:14.183281 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:14.183315 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:14.194605 1464677 round_trippers.go:574] Response Status: 404 Not Found in 11 milliseconds
	I0815 01:05:14.195287 1464677 pod_ready.go:97] node "ha-095774-m03" hosting pod "kube-proxy-7nfbl" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:14.195325 1464677 pod_ready.go:81] duration metric: took 409.643414ms for pod "kube-proxy-7nfbl" in "kube-system" namespace to be "Ready" ...
	E0815 01:05:14.195336 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774-m03" hosting pod "kube-proxy-7nfbl" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:14.195346 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p5kcz" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:14.382501 1464677 request.go:632] Waited for 187.071544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p5kcz
	I0815 01:05:14.382584 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p5kcz
	I0815 01:05:14.382599 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:14.382607 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:14.382617 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:14.387568 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:05:14.582545 1464677 request.go:632] Waited for 194.179316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:05:14.582612 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:05:14.582622 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:14.582631 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:14.582640 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:14.587126 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:05:14.587822 1464677 pod_ready.go:92] pod "kube-proxy-p5kcz" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:14.587848 1464677 pod_ready.go:81] duration metric: took 392.48532ms for pod "kube-proxy-p5kcz" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:14.587861 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qfv9m" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:14.783540 1464677 request.go:632] Waited for 195.6029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qfv9m
	I0815 01:05:14.783617 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qfv9m
	I0815 01:05:14.783629 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:14.783638 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:14.783641 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:14.787031 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:05:14.983093 1464677 request.go:632] Waited for 195.3305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:14.983156 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:14.983167 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:14.983179 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:14.983184 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:14.986033 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:14.986769 1464677 pod_ready.go:92] pod "kube-proxy-qfv9m" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:14.986796 1464677 pod_ready.go:81] duration metric: took 398.926325ms for pod "kube-proxy-qfv9m" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:14.986807 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sdkx7" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:15.182592 1464677 request.go:632] Waited for 195.714711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sdkx7
	I0815 01:05:15.182692 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sdkx7
	I0815 01:05:15.182708 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:15.182719 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:15.182729 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:15.185803 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:05:15.382653 1464677 request.go:632] Waited for 196.172355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:15.382760 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:15.382771 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:15.382780 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:15.382787 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:15.385686 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:15.386294 1464677 pod_ready.go:92] pod "kube-proxy-sdkx7" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:15.386317 1464677 pod_ready.go:81] duration metric: took 399.501237ms for pod "kube-proxy-sdkx7" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:15.386330 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:15.582548 1464677 request.go:632] Waited for 196.127762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774
	I0815 01:05:15.582611 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774
	I0815 01:05:15.582617 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:15.582631 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:15.582638 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:15.585582 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:15.782482 1464677 request.go:632] Waited for 196.255325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:15.782541 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:05:15.782552 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:15.782562 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:15.782570 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:15.785143 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:15.785730 1464677 pod_ready.go:92] pod "kube-scheduler-ha-095774" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:15.785749 1464677 pod_ready.go:81] duration metric: took 399.408536ms for pod "kube-scheduler-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:15.785761 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:15.983286 1464677 request.go:632] Waited for 197.450538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774-m02
	I0815 01:05:15.983344 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774-m02
	I0815 01:05:15.983353 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:15.983368 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:15.983374 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:15.986192 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:16.183155 1464677 request.go:632] Waited for 196.36104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:16.183309 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:05:16.183322 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:16.183332 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:16.183338 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:16.186311 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:16.186921 1464677 pod_ready.go:92] pod "kube-scheduler-ha-095774-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 01:05:16.186943 1464677 pod_ready.go:81] duration metric: took 401.173181ms for pod "kube-scheduler-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:16.186955 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095774-m03" in "kube-system" namespace to be "Ready" ...
	I0815 01:05:16.383479 1464677 request.go:632] Waited for 196.437093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774-m03
	I0815 01:05:16.383545 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774-m03
	I0815 01:05:16.383555 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:16.383568 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:16.383583 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:16.386456 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:05:16.583509 1464677 request.go:632] Waited for 196.365553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:16.583623 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m03
	I0815 01:05:16.583636 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:16.583645 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:16.583650 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:16.586962 1464677 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0815 01:05:16.587110 1464677 pod_ready.go:97] node "ha-095774-m03" hosting pod "kube-scheduler-ha-095774-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:16.587129 1464677 pod_ready.go:81] duration metric: took 400.166939ms for pod "kube-scheduler-ha-095774-m03" in "kube-system" namespace to be "Ready" ...
	E0815 01:05:16.587140 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774-m03" hosting pod "kube-scheduler-ha-095774-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-095774-m03": nodes "ha-095774-m03" not found
	I0815 01:05:16.587152 1464677 pod_ready.go:38] duration metric: took 7.477238585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:05:16.587174 1464677 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:05:16.587235 1464677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:05:16.598542 1464677 api_server.go:72] duration metric: took 25.980637576s to wait for apiserver process to appear ...
	I0815 01:05:16.598571 1464677 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:05:16.598590 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:16.606803 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:16.606840 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:17.099510 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:17.107998 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:17.108026 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:17.599472 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:17.608013 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:17.608042 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:18.099654 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:18.107702 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:18.107731 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:18.599382 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:18.607136 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:18.607165 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:19.098713 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:19.108122 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:19.108156 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:19.599633 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:19.607580 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:19.607619 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:20.099553 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:20.109928 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:20.109966 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:20.599583 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:20.607448 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:20.607480 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:21.098702 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:21.108126 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:21.108156 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:21.599546 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:21.607263 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:21.607295 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:22.098661 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:22.107372 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:22.107451 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:22.599586 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:22.607664 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:22.607700 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:23.099280 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:23.107948 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:23.107978 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:23.599531 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:23.607246 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... same healthz response as above: every check ok except [-]poststarthook/start-service-ip-repair-controllers ...]
	healthz check failed
	W0815 01:05:23.607279 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... same healthz response as above ...]
	healthz check failed
	[... the same healthz poll repeated about every 500ms through 01:05:31.615, each time logging the identical 500 response with only poststarthook/start-service-ip-repair-controllers failing ...]
	I0815 01:05:32.099568 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:32.108222 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:32.108254 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:32.598697 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:32.609001 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:32.609034 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:33.098664 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:33.107281 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:33.107317 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:33.598716 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:33.610803 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:33.610831 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:34.099386 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:34.107795 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:34.107828 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:34.599355 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:34.611098 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:34.611131 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:35.098704 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:35.106453 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:35.106483 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:35.598979 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:35.608053 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:35.608095 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:36.099525 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:36.124212 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:36.124246 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:36.598670 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:36.607071 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:36.607105 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:37.099556 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:37.108105 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:37.108137 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:37.598750 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:37.606369 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:37.606406 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:38.098905 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:38.107460 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:38.107489 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:38.599142 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:38.608504 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:38.608535 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:39.099253 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:39.107500 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:39.107531 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:39.598668 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:39.612575 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:39.612603 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:40.099501 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:40.108222 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:40.108252 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:40.598672 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:40.606856 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:40.606934 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:41.099062 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:41.108264 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:41.108304 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:41.598698 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:41.607820 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:41.607920 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:42.099369 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:42.112002 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:42.112151 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:42.598711 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:42.613125 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:42.613158 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:43.098707 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:43.108089 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:43.108129 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:43.599527 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:43.607424 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:43.607454 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:44.098726 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:44.106357 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:44.106387 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:44.598830 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:44.606927 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:44.606958 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:45.098701 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:45.121613 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:45.121661 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:45.599176 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:45.607258 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:45.607288 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:46.098722 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:46.107281 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:46.107314 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:46.599136 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:46.611179 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:46.611219 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:47.098808 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:47.106955 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:47.106984 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:47.599211 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:47.607411 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:47.607504 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:48.098692 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:48.106549 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:48.106580 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:48.599217 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:48.612647 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:48.612728 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:49.099295 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:49.106983 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:05:49.107018 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:05:49.599519 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:49.639240 1464677 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": read tcp 192.168.49.1:35774->192.168.49.2:8443: read: connection reset by peer
	I0815 01:05:50.099203 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:52.656232 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:05:52.656266 1464677 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:05:52.656294 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:05:52.656359 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:05:52.749198 1464677 cri.go:89] found id: "007f9711be5d58e6a660d69e7ffe3d9a8e79dc0eeeab842bcff3514cca65e9ab"
	I0815 01:05:52.749223 1464677 cri.go:89] found id: "cf7557e98965f86cd64120a9b4c749e26a201618750deff5afc87d9333358cb9"
	I0815 01:05:52.749228 1464677 cri.go:89] found id: ""
	I0815 01:05:52.749235 1464677 logs.go:276] 2 containers: [007f9711be5d58e6a660d69e7ffe3d9a8e79dc0eeeab842bcff3514cca65e9ab cf7557e98965f86cd64120a9b4c749e26a201618750deff5afc87d9333358cb9]
	I0815 01:05:52.749296 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:52.753376 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:52.759035 1464677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:05:52.759109 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:05:52.825299 1464677 cri.go:89] found id: "17a6a0f74ae15d739afba06deba3323563a2e9f748a930b936e4915718a400c3"
	I0815 01:05:52.825325 1464677 cri.go:89] found id: "de320154c3b7615825541f41fe7f889c0aafbadedebb03a616477fc948ebf001"
	I0815 01:05:52.825330 1464677 cri.go:89] found id: ""
	I0815 01:05:52.825338 1464677 logs.go:276] 2 containers: [17a6a0f74ae15d739afba06deba3323563a2e9f748a930b936e4915718a400c3 de320154c3b7615825541f41fe7f889c0aafbadedebb03a616477fc948ebf001]
	I0815 01:05:52.825395 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:52.829397 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:52.837368 1464677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:05:52.837443 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:05:52.930083 1464677 cri.go:89] found id: ""
	I0815 01:05:52.930111 1464677 logs.go:276] 0 containers: []
	W0815 01:05:52.930121 1464677 logs.go:278] No container was found matching "coredns"
	I0815 01:05:52.930127 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:05:52.930200 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:05:52.987857 1464677 cri.go:89] found id: "29a0bff352661657c16406bb8d980a4ebf415ee5fffd90dc47c9107e832205c0"
	I0815 01:05:52.987882 1464677 cri.go:89] found id: "14c1d43fc38013044abf77a5771ea3143f039d23a552331e7d791fae01d003d3"
	I0815 01:05:52.987887 1464677 cri.go:89] found id: ""
	I0815 01:05:52.987899 1464677 logs.go:276] 2 containers: [29a0bff352661657c16406bb8d980a4ebf415ee5fffd90dc47c9107e832205c0 14c1d43fc38013044abf77a5771ea3143f039d23a552331e7d791fae01d003d3]
	I0815 01:05:52.987956 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:52.992114 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:53.002911 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:05:53.003016 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:05:53.063925 1464677 cri.go:89] found id: "0c20773485340377a587b772639ca714dd51a48364f3aaff27509baddec5ef26"
	I0815 01:05:53.063956 1464677 cri.go:89] found id: ""
	I0815 01:05:53.063963 1464677 logs.go:276] 1 containers: [0c20773485340377a587b772639ca714dd51a48364f3aaff27509baddec5ef26]
	I0815 01:05:53.064033 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:53.067930 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:05:53.068009 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:05:53.128207 1464677 cri.go:89] found id: "8f23505cab6582c1481b26afe63551992c9672dc176906efcefebc1c7756f881"
	I0815 01:05:53.128233 1464677 cri.go:89] found id: "5c77c04cb2906b672d29c186e4343b561787088303943d20a09c2ecdcc62348f"
	I0815 01:05:53.128238 1464677 cri.go:89] found id: ""
	I0815 01:05:53.128245 1464677 logs.go:276] 2 containers: [8f23505cab6582c1481b26afe63551992c9672dc176906efcefebc1c7756f881 5c77c04cb2906b672d29c186e4343b561787088303943d20a09c2ecdcc62348f]
	I0815 01:05:53.128300 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:53.131880 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:53.135370 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:05:53.135470 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:05:53.173640 1464677 cri.go:89] found id: "7589e6cd006928c2c828938a07dab5b38dc3db2839520e3a6b6ceb81f24ff21d"
	I0815 01:05:53.173673 1464677 cri.go:89] found id: ""
	I0815 01:05:53.173682 1464677 logs.go:276] 1 containers: [7589e6cd006928c2c828938a07dab5b38dc3db2839520e3a6b6ceb81f24ff21d]
	I0815 01:05:53.173745 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:53.177603 1464677 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:05:53.177632 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:05:53.437812 1464677 logs.go:123] Gathering logs for kube-apiserver [007f9711be5d58e6a660d69e7ffe3d9a8e79dc0eeeab842bcff3514cca65e9ab] ...
	I0815 01:05:53.437847 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 007f9711be5d58e6a660d69e7ffe3d9a8e79dc0eeeab842bcff3514cca65e9ab"
	I0815 01:05:53.511287 1464677 logs.go:123] Gathering logs for kube-controller-manager [8f23505cab6582c1481b26afe63551992c9672dc176906efcefebc1c7756f881] ...
	I0815 01:05:53.511323 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f23505cab6582c1481b26afe63551992c9672dc176906efcefebc1c7756f881"
	I0815 01:05:53.573610 1464677 logs.go:123] Gathering logs for kubelet ...
	I0815 01:05:53.573645 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:05:53.650122 1464677 logs.go:123] Gathering logs for etcd [17a6a0f74ae15d739afba06deba3323563a2e9f748a930b936e4915718a400c3] ...
	I0815 01:05:53.650159 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17a6a0f74ae15d739afba06deba3323563a2e9f748a930b936e4915718a400c3"
	I0815 01:05:53.714265 1464677 logs.go:123] Gathering logs for etcd [de320154c3b7615825541f41fe7f889c0aafbadedebb03a616477fc948ebf001] ...
	I0815 01:05:53.714300 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de320154c3b7615825541f41fe7f889c0aafbadedebb03a616477fc948ebf001"
	I0815 01:05:53.796929 1464677 logs.go:123] Gathering logs for kindnet [7589e6cd006928c2c828938a07dab5b38dc3db2839520e3a6b6ceb81f24ff21d] ...
	I0815 01:05:53.796965 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7589e6cd006928c2c828938a07dab5b38dc3db2839520e3a6b6ceb81f24ff21d"
	I0815 01:05:53.871623 1464677 logs.go:123] Gathering logs for container status ...
	I0815 01:05:53.871657 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:05:53.956206 1464677 logs.go:123] Gathering logs for kube-scheduler [29a0bff352661657c16406bb8d980a4ebf415ee5fffd90dc47c9107e832205c0] ...
	I0815 01:05:53.956234 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29a0bff352661657c16406bb8d980a4ebf415ee5fffd90dc47c9107e832205c0"
	I0815 01:05:54.073794 1464677 logs.go:123] Gathering logs for kube-controller-manager [5c77c04cb2906b672d29c186e4343b561787088303943d20a09c2ecdcc62348f] ...
	I0815 01:05:54.073873 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c77c04cb2906b672d29c186e4343b561787088303943d20a09c2ecdcc62348f"
	I0815 01:05:54.115627 1464677 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:05:54.115665 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:05:54.194981 1464677 logs.go:123] Gathering logs for dmesg ...
	I0815 01:05:54.195017 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:05:54.216553 1464677 logs.go:123] Gathering logs for kube-apiserver [cf7557e98965f86cd64120a9b4c749e26a201618750deff5afc87d9333358cb9] ...
	I0815 01:05:54.216581 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7557e98965f86cd64120a9b4c749e26a201618750deff5afc87d9333358cb9"
	I0815 01:05:54.253820 1464677 logs.go:123] Gathering logs for kube-scheduler [14c1d43fc38013044abf77a5771ea3143f039d23a552331e7d791fae01d003d3] ...
	I0815 01:05:54.253849 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14c1d43fc38013044abf77a5771ea3143f039d23a552331e7d791fae01d003d3"
	I0815 01:05:54.297423 1464677 logs.go:123] Gathering logs for kube-proxy [0c20773485340377a587b772639ca714dd51a48364f3aaff27509baddec5ef26] ...
	I0815 01:05:54.297452 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c20773485340377a587b772639ca714dd51a48364f3aaff27509baddec5ef26"
	I0815 01:05:56.836183 1464677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 01:05:56.845551 1464677 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 01:05:56.845631 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0815 01:05:56.845643 1464677 round_trippers.go:469] Request Headers:
	I0815 01:05:56.845653 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:05:56.845657 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:05:56.858740 1464677 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0815 01:05:56.858846 1464677 api_server.go:141] control plane version: v1.31.0
	I0815 01:05:56.858866 1464677 api_server.go:131] duration metric: took 40.26028763s to wait for apiserver health ...
	I0815 01:05:56.858875 1464677 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:05:56.858898 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:05:56.858961 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:05:56.899180 1464677 cri.go:89] found id: "007f9711be5d58e6a660d69e7ffe3d9a8e79dc0eeeab842bcff3514cca65e9ab"
	I0815 01:05:56.899221 1464677 cri.go:89] found id: "cf7557e98965f86cd64120a9b4c749e26a201618750deff5afc87d9333358cb9"
	I0815 01:05:56.899228 1464677 cri.go:89] found id: ""
	I0815 01:05:56.899248 1464677 logs.go:276] 2 containers: [007f9711be5d58e6a660d69e7ffe3d9a8e79dc0eeeab842bcff3514cca65e9ab cf7557e98965f86cd64120a9b4c749e26a201618750deff5afc87d9333358cb9]
	I0815 01:05:56.899309 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:56.903220 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:56.906934 1464677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:05:56.907028 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:05:56.949353 1464677 cri.go:89] found id: "17a6a0f74ae15d739afba06deba3323563a2e9f748a930b936e4915718a400c3"
	I0815 01:05:56.949376 1464677 cri.go:89] found id: "de320154c3b7615825541f41fe7f889c0aafbadedebb03a616477fc948ebf001"
	I0815 01:05:56.949381 1464677 cri.go:89] found id: ""
	I0815 01:05:56.949418 1464677 logs.go:276] 2 containers: [17a6a0f74ae15d739afba06deba3323563a2e9f748a930b936e4915718a400c3 de320154c3b7615825541f41fe7f889c0aafbadedebb03a616477fc948ebf001]
	I0815 01:05:56.949490 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:56.953325 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:56.956820 1464677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:05:56.956888 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:05:57.000893 1464677 cri.go:89] found id: ""
	I0815 01:05:57.000919 1464677 logs.go:276] 0 containers: []
	W0815 01:05:57.000937 1464677 logs.go:278] No container was found matching "coredns"
	I0815 01:05:57.000944 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:05:57.001022 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:05:57.065025 1464677 cri.go:89] found id: "29a0bff352661657c16406bb8d980a4ebf415ee5fffd90dc47c9107e832205c0"
	I0815 01:05:57.065100 1464677 cri.go:89] found id: "14c1d43fc38013044abf77a5771ea3143f039d23a552331e7d791fae01d003d3"
	I0815 01:05:57.065119 1464677 cri.go:89] found id: ""
	I0815 01:05:57.065142 1464677 logs.go:276] 2 containers: [29a0bff352661657c16406bb8d980a4ebf415ee5fffd90dc47c9107e832205c0 14c1d43fc38013044abf77a5771ea3143f039d23a552331e7d791fae01d003d3]
	I0815 01:05:57.065239 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:57.069940 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:57.074435 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:05:57.074542 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:05:57.117840 1464677 cri.go:89] found id: "0c20773485340377a587b772639ca714dd51a48364f3aaff27509baddec5ef26"
	I0815 01:05:57.117863 1464677 cri.go:89] found id: ""
	I0815 01:05:57.117871 1464677 logs.go:276] 1 containers: [0c20773485340377a587b772639ca714dd51a48364f3aaff27509baddec5ef26]
	I0815 01:05:57.117933 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:57.121721 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:05:57.121791 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:05:57.159954 1464677 cri.go:89] found id: "8f23505cab6582c1481b26afe63551992c9672dc176906efcefebc1c7756f881"
	I0815 01:05:57.159983 1464677 cri.go:89] found id: "5c77c04cb2906b672d29c186e4343b561787088303943d20a09c2ecdcc62348f"
	I0815 01:05:57.159988 1464677 cri.go:89] found id: ""
	I0815 01:05:57.159997 1464677 logs.go:276] 2 containers: [8f23505cab6582c1481b26afe63551992c9672dc176906efcefebc1c7756f881 5c77c04cb2906b672d29c186e4343b561787088303943d20a09c2ecdcc62348f]
	I0815 01:05:57.160072 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:57.164182 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:57.167928 1464677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:05:57.168031 1464677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:05:57.205526 1464677 cri.go:89] found id: "7589e6cd006928c2c828938a07dab5b38dc3db2839520e3a6b6ceb81f24ff21d"
	I0815 01:05:57.205551 1464677 cri.go:89] found id: ""
	I0815 01:05:57.205560 1464677 logs.go:276] 1 containers: [7589e6cd006928c2c828938a07dab5b38dc3db2839520e3a6b6ceb81f24ff21d]
	I0815 01:05:57.205651 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:05:57.209375 1464677 logs.go:123] Gathering logs for kube-proxy [0c20773485340377a587b772639ca714dd51a48364f3aaff27509baddec5ef26] ...
	I0815 01:05:57.209401 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c20773485340377a587b772639ca714dd51a48364f3aaff27509baddec5ef26"
	I0815 01:05:57.260239 1464677 logs.go:123] Gathering logs for kindnet [7589e6cd006928c2c828938a07dab5b38dc3db2839520e3a6b6ceb81f24ff21d] ...
	I0815 01:05:57.260268 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7589e6cd006928c2c828938a07dab5b38dc3db2839520e3a6b6ceb81f24ff21d"
	I0815 01:05:57.314879 1464677 logs.go:123] Gathering logs for kubelet ...
	I0815 01:05:57.314919 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:05:57.384255 1464677 logs.go:123] Gathering logs for kube-apiserver [cf7557e98965f86cd64120a9b4c749e26a201618750deff5afc87d9333358cb9] ...
	I0815 01:05:57.384294 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7557e98965f86cd64120a9b4c749e26a201618750deff5afc87d9333358cb9"
	I0815 01:05:57.422574 1464677 logs.go:123] Gathering logs for etcd [17a6a0f74ae15d739afba06deba3323563a2e9f748a930b936e4915718a400c3] ...
	I0815 01:05:57.422602 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17a6a0f74ae15d739afba06deba3323563a2e9f748a930b936e4915718a400c3"
	I0815 01:05:57.479390 1464677 logs.go:123] Gathering logs for etcd [de320154c3b7615825541f41fe7f889c0aafbadedebb03a616477fc948ebf001] ...
	I0815 01:05:57.479468 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de320154c3b7615825541f41fe7f889c0aafbadedebb03a616477fc948ebf001"
	I0815 01:05:57.551693 1464677 logs.go:123] Gathering logs for kube-scheduler [29a0bff352661657c16406bb8d980a4ebf415ee5fffd90dc47c9107e832205c0] ...
	I0815 01:05:57.551727 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29a0bff352661657c16406bb8d980a4ebf415ee5fffd90dc47c9107e832205c0"
	I0815 01:05:57.611688 1464677 logs.go:123] Gathering logs for kube-scheduler [14c1d43fc38013044abf77a5771ea3143f039d23a552331e7d791fae01d003d3] ...
	I0815 01:05:57.611725 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14c1d43fc38013044abf77a5771ea3143f039d23a552331e7d791fae01d003d3"
	I0815 01:05:57.649613 1464677 logs.go:123] Gathering logs for kube-controller-manager [5c77c04cb2906b672d29c186e4343b561787088303943d20a09c2ecdcc62348f] ...
	I0815 01:05:57.649692 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c77c04cb2906b672d29c186e4343b561787088303943d20a09c2ecdcc62348f"
	I0815 01:05:57.692970 1464677 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:05:57.692997 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:05:57.765569 1464677 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:05:57.765604 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:05:58.003205 1464677 logs.go:123] Gathering logs for kube-apiserver [007f9711be5d58e6a660d69e7ffe3d9a8e79dc0eeeab842bcff3514cca65e9ab] ...
	I0815 01:05:58.003243 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 007f9711be5d58e6a660d69e7ffe3d9a8e79dc0eeeab842bcff3514cca65e9ab"
	I0815 01:05:58.072401 1464677 logs.go:123] Gathering logs for dmesg ...
	I0815 01:05:58.072437 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:05:58.090558 1464677 logs.go:123] Gathering logs for kube-controller-manager [8f23505cab6582c1481b26afe63551992c9672dc176906efcefebc1c7756f881] ...
	I0815 01:05:58.090587 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f23505cab6582c1481b26afe63551992c9672dc176906efcefebc1c7756f881"
	I0815 01:05:58.157277 1464677 logs.go:123] Gathering logs for container status ...
	I0815 01:05:58.157314 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:06:00.703633 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 01:06:00.703672 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:00.703698 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:00.703711 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:00.711895 1464677 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0815 01:06:00.720602 1464677 system_pods.go:59] 19 kube-system pods found
	I0815 01:06:00.720646 1464677 system_pods.go:61] "coredns-6f6b679f8f-b4vhd" [d9b2f6ab-12ac-40f3-bf9a-13e554f68ee2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:06:00.720656 1464677 system_pods.go:61] "coredns-6f6b679f8f-tl9kf" [6d04f7e1-ba5f-4d17-b68d-eaa607ad7209] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:06:00.720686 1464677 system_pods.go:61] "etcd-ha-095774" [af3a4697-2a4d-4b5f-aa1a-b4971d6d27d1] Running
	I0815 01:06:00.720700 1464677 system_pods.go:61] "etcd-ha-095774-m02" [223577ac-be59-44b2-af8f-a50d5c91e5a6] Running
	I0815 01:06:00.720709 1464677 system_pods.go:61] "kindnet-dbrtf" [1cc95641-1d2a-4820-ab48-4b3c6f7369cd] Running
	I0815 01:06:00.720719 1464677 system_pods.go:61] "kindnet-lgfzw" [a566f19f-a9bf-4120-9447-1f829394413e] Running
	I0815 01:06:00.720732 1464677 system_pods.go:61] "kindnet-wkfn6" [9b55e77c-1c6f-400b-a94f-f580aa486c4e] Running
	I0815 01:06:00.720758 1464677 system_pods.go:61] "kube-apiserver-ha-095774" [9c50ab04-c5db-4c44-8aae-fdff984aa89a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:06:00.720772 1464677 system_pods.go:61] "kube-apiserver-ha-095774-m02" [05a67694-3e1d-4d4c-a3ee-7af7b6118258] Running
	I0815 01:06:00.720791 1464677 system_pods.go:61] "kube-controller-manager-ha-095774" [0e9cd417-6b1f-42f6-beb3-2b2c32217236] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:06:00.720803 1464677 system_pods.go:61] "kube-controller-manager-ha-095774-m02" [76597117-b47b-47f7-9c25-b6a12c97ce9d] Running
	I0815 01:06:00.720809 1464677 system_pods.go:61] "kube-proxy-p5kcz" [f497389a-dbdc-4e00-ae43-c75e5b775caf] Running
	I0815 01:06:00.720813 1464677 system_pods.go:61] "kube-proxy-qfv9m" [5aa0a744-0aad-41ac-bd32-027c90d518c1] Running
	I0815 01:06:00.720822 1464677 system_pods.go:61] "kube-proxy-sdkx7" [963cf5da-ff1c-465e-8d0a-a45eee966b39] Running
	I0815 01:06:00.720826 1464677 system_pods.go:61] "kube-scheduler-ha-095774" [339da6d1-fd85-4443-9db3-2871a8bc8b09] Running
	I0815 01:06:00.720831 1464677 system_pods.go:61] "kube-scheduler-ha-095774-m02" [f6340b5a-1af4-44a0-b1d7-34f778b095c3] Running
	I0815 01:06:00.720840 1464677 system_pods.go:61] "kube-vip-ha-095774" [05da43b7-8e00-442a-9c57-58f9a5428314] Running
	I0815 01:06:00.720844 1464677 system_pods.go:61] "kube-vip-ha-095774-m02" [6b5b1372-19ef-4d20-8bec-af8bfb4a679f] Running
	I0815 01:06:00.720848 1464677 system_pods.go:61] "storage-provisioner" [847af497-7eec-470f-af0d-5e108a7a213d] Running
	I0815 01:06:00.720869 1464677 system_pods.go:74] duration metric: took 3.861986165s to wait for pod list to return data ...
	I0815 01:06:00.720883 1464677 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:06:00.721000 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0815 01:06:00.721013 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:00.721021 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:00.721027 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:00.726745 1464677 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 01:06:00.727577 1464677 default_sa.go:45] found service account: "default"
	I0815 01:06:00.727600 1464677 default_sa.go:55] duration metric: took 6.709631ms for default service account to be created ...
	I0815 01:06:00.727636 1464677 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:06:00.727724 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 01:06:00.727737 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:00.727746 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:00.727751 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:00.810821 1464677 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0815 01:06:00.818985 1464677 system_pods.go:86] 19 kube-system pods found
	I0815 01:06:00.819031 1464677 system_pods.go:89] "coredns-6f6b679f8f-b4vhd" [d9b2f6ab-12ac-40f3-bf9a-13e554f68ee2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:06:00.819043 1464677 system_pods.go:89] "coredns-6f6b679f8f-tl9kf" [6d04f7e1-ba5f-4d17-b68d-eaa607ad7209] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:06:00.819050 1464677 system_pods.go:89] "etcd-ha-095774" [af3a4697-2a4d-4b5f-aa1a-b4971d6d27d1] Running
	I0815 01:06:00.819057 1464677 system_pods.go:89] "etcd-ha-095774-m02" [223577ac-be59-44b2-af8f-a50d5c91e5a6] Running
	I0815 01:06:00.819070 1464677 system_pods.go:89] "kindnet-dbrtf" [1cc95641-1d2a-4820-ab48-4b3c6f7369cd] Running
	I0815 01:06:00.819074 1464677 system_pods.go:89] "kindnet-lgfzw" [a566f19f-a9bf-4120-9447-1f829394413e] Running
	I0815 01:06:00.819079 1464677 system_pods.go:89] "kindnet-wkfn6" [9b55e77c-1c6f-400b-a94f-f580aa486c4e] Running
	I0815 01:06:00.819086 1464677 system_pods.go:89] "kube-apiserver-ha-095774" [9c50ab04-c5db-4c44-8aae-fdff984aa89a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:06:00.819097 1464677 system_pods.go:89] "kube-apiserver-ha-095774-m02" [05a67694-3e1d-4d4c-a3ee-7af7b6118258] Running
	I0815 01:06:00.819112 1464677 system_pods.go:89] "kube-controller-manager-ha-095774" [0e9cd417-6b1f-42f6-beb3-2b2c32217236] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:06:00.819120 1464677 system_pods.go:89] "kube-controller-manager-ha-095774-m02" [76597117-b47b-47f7-9c25-b6a12c97ce9d] Running
	I0815 01:06:00.819132 1464677 system_pods.go:89] "kube-proxy-p5kcz" [f497389a-dbdc-4e00-ae43-c75e5b775caf] Running
	I0815 01:06:00.819139 1464677 system_pods.go:89] "kube-proxy-qfv9m" [5aa0a744-0aad-41ac-bd32-027c90d518c1] Running
	I0815 01:06:00.819143 1464677 system_pods.go:89] "kube-proxy-sdkx7" [963cf5da-ff1c-465e-8d0a-a45eee966b39] Running
	I0815 01:06:00.819153 1464677 system_pods.go:89] "kube-scheduler-ha-095774" [339da6d1-fd85-4443-9db3-2871a8bc8b09] Running
	I0815 01:06:00.819157 1464677 system_pods.go:89] "kube-scheduler-ha-095774-m02" [f6340b5a-1af4-44a0-b1d7-34f778b095c3] Running
	I0815 01:06:00.819166 1464677 system_pods.go:89] "kube-vip-ha-095774" [05da43b7-8e00-442a-9c57-58f9a5428314] Running
	I0815 01:06:00.819170 1464677 system_pods.go:89] "kube-vip-ha-095774-m02" [6b5b1372-19ef-4d20-8bec-af8bfb4a679f] Running
	I0815 01:06:00.819174 1464677 system_pods.go:89] "storage-provisioner" [847af497-7eec-470f-af0d-5e108a7a213d] Running
	I0815 01:06:00.819185 1464677 system_pods.go:126] duration metric: took 91.534982ms to wait for k8s-apps to be running ...
	I0815 01:06:00.819199 1464677 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:06:00.819265 1464677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:06:00.842363 1464677 system_svc.go:56] duration metric: took 23.154323ms WaitForService to wait for kubelet
	I0815 01:06:00.842390 1464677 kubeadm.go:582] duration metric: took 1m10.224489456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:06:00.842424 1464677 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:06:00.842500 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0815 01:06:00.842511 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:00.842520 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:00.842524 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:00.859278 1464677 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0815 01:06:00.861500 1464677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 01:06:00.861538 1464677 node_conditions.go:123] node cpu capacity is 2
	I0815 01:06:00.861550 1464677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 01:06:00.861555 1464677 node_conditions.go:123] node cpu capacity is 2
	I0815 01:06:00.861561 1464677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 01:06:00.861566 1464677 node_conditions.go:123] node cpu capacity is 2
	I0815 01:06:00.861571 1464677 node_conditions.go:105] duration metric: took 19.141652ms to run NodePressure ...
	I0815 01:06:00.861584 1464677 start.go:241] waiting for startup goroutines ...
	I0815 01:06:00.861610 1464677 start.go:255] writing updated cluster config ...
	I0815 01:06:00.865205 1464677 out.go:177] 
	I0815 01:06:00.868053 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:06:00.868181 1464677 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/config.json ...
	I0815 01:06:00.871268 1464677 out.go:177] * Starting "ha-095774-m04" worker node in "ha-095774" cluster
	I0815 01:06:00.874624 1464677 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 01:06:00.877328 1464677 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 01:06:00.879850 1464677 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:06:00.879904 1464677 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 01:06:00.879887 1464677 cache.go:56] Caching tarball of preloaded images
	I0815 01:06:00.880175 1464677 preload.go:172] Found /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0815 01:06:00.880194 1464677 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:06:00.880335 1464677 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/config.json ...
	W0815 01:06:00.897993 1464677 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 01:06:00.898014 1464677 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 01:06:00.898092 1464677 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 01:06:00.898116 1464677 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 01:06:00.898125 1464677 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 01:06:00.898134 1464677 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 01:06:00.898139 1464677 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 01:06:00.899444 1464677 image.go:273] response: 
	I0815 01:06:01.035448 1464677 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 01:06:01.035491 1464677 cache.go:194] Successfully downloaded all kic artifacts
	I0815 01:06:01.035526 1464677 start.go:360] acquireMachinesLock for ha-095774-m04: {Name:mkdf3800061d4ba800392fa047398d1e78506b53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:06:01.035613 1464677 start.go:364] duration metric: took 64.549µs to acquireMachinesLock for "ha-095774-m04"
	I0815 01:06:01.035637 1464677 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:06:01.035649 1464677 fix.go:54] fixHost starting: m04
	I0815 01:06:01.035947 1464677 cli_runner.go:164] Run: docker container inspect ha-095774-m04 --format={{.State.Status}}
	I0815 01:06:01.052767 1464677 fix.go:112] recreateIfNeeded on ha-095774-m04: state=Stopped err=<nil>
	W0815 01:06:01.052796 1464677 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:06:01.055600 1464677 out.go:177] * Restarting existing docker container for "ha-095774-m04" ...
	I0815 01:06:01.057942 1464677 cli_runner.go:164] Run: docker start ha-095774-m04
	I0815 01:06:01.396204 1464677 cli_runner.go:164] Run: docker container inspect ha-095774-m04 --format={{.State.Status}}
	I0815 01:06:01.421535 1464677 kic.go:430] container "ha-095774-m04" state is running.
	I0815 01:06:01.421903 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774-m04
	I0815 01:06:01.446874 1464677 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/config.json ...
	I0815 01:06:01.447124 1464677 machine.go:94] provisionDockerMachine start ...
	I0815 01:06:01.447188 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:01.482708 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:06:01.482956 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34670 <nil> <nil>}
	I0815 01:06:01.482970 1464677 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:06:01.484998 1464677 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0815 01:06:04.622692 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095774-m04
	
	I0815 01:06:04.622722 1464677 ubuntu.go:169] provisioning hostname "ha-095774-m04"
	I0815 01:06:04.622788 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:04.641473 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:06:04.641716 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34670 <nil> <nil>}
	I0815 01:06:04.641735 1464677 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-095774-m04 && echo "ha-095774-m04" | sudo tee /etc/hostname
	I0815 01:06:04.801562 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-095774-m04
	
	I0815 01:06:04.801643 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:04.826728 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:06:04.826973 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34670 <nil> <nil>}
	I0815 01:06:04.826989 1464677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-095774-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-095774-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-095774-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:06:04.970295 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:06:04.970326 1464677 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19443-1398913/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-1398913/.minikube}
	I0815 01:06:04.970343 1464677 ubuntu.go:177] setting up certificates
	I0815 01:06:04.970360 1464677 provision.go:84] configureAuth start
	I0815 01:06:04.970445 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774-m04
	I0815 01:06:04.997678 1464677 provision.go:143] copyHostCerts
	I0815 01:06:04.997723 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem
	I0815 01:06:04.997758 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem, removing ...
	I0815 01:06:04.997769 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem
	I0815 01:06:04.997848 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.pem (1082 bytes)
	I0815 01:06:04.997937 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem
	I0815 01:06:04.997962 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem, removing ...
	I0815 01:06:04.997970 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem
	I0815 01:06:04.997997 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/cert.pem (1123 bytes)
	I0815 01:06:04.998043 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem
	I0815 01:06:04.998065 1464677 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem, removing ...
	I0815 01:06:04.998074 1464677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem
	I0815 01:06:04.998101 1464677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-1398913/.minikube/key.pem (1679 bytes)
	I0815 01:06:04.998156 1464677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem org=jenkins.ha-095774-m04 san=[127.0.0.1 192.168.49.5 ha-095774-m04 localhost minikube]
	I0815 01:06:05.490932 1464677 provision.go:177] copyRemoteCerts
	I0815 01:06:05.491010 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:06:05.491058 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:05.509228 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34670 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m04/id_rsa Username:docker}
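	(The sshutil.go:53 line above shows minikube opening an SSH session to the node's forwarded port 127.0.0.1:34670 with the machine's id_rsa key; every following "ssh_runner.go:195] Run:" entry is a command executed over that connection. Below is a minimal sketch of that pattern using golang.org/x/crypto/ssh; the helper name is hypothetical and the code is illustrative, not minikube's actual ssh_runner implementation.)

```go
// Sketch: run a single command over SSH the way the ssh_runner lines above do.
// runCommand is a hypothetical helper; address, user and key path mirror the log.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runCommand(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runCommand("127.0.0.1:34670", "docker",
		"/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m04/id_rsa",
		"sudo mkdir -p /etc/docker")
	if err != nil {
		fmt.Println("ssh command failed:", err)
	}
	fmt.Print(out)
}
```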
	I0815 01:06:05.620969 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 01:06:05.621025 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 01:06:05.653186 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 01:06:05.653244 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:06:05.687400 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 01:06:05.687472 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 01:06:05.720955 1464677 provision.go:87] duration metric: took 750.573955ms to configureAuth
	I0815 01:06:05.720983 1464677 ubuntu.go:193] setting minikube options for container-runtime
	I0815 01:06:05.721241 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:06:05.721351 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:05.740304 1464677 main.go:141] libmachine: Using SSH client type: native
	I0815 01:06:05.740545 1464677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 34670 <nil> <nil>}
	I0815 01:06:05.740560 1464677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:06:06.057262 1464677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:06:06.057292 1464677 machine.go:97] duration metric: took 4.6101577s to provisionDockerMachine
	I0815 01:06:06.057311 1464677 start.go:293] postStartSetup for "ha-095774-m04" (driver="docker")
	I0815 01:06:06.057323 1464677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:06:06.057396 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:06:06.057445 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:06.085200 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34670 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m04/id_rsa Username:docker}
	I0815 01:06:06.187709 1464677 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:06:06.191087 1464677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 01:06:06.191125 1464677 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 01:06:06.191136 1464677 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 01:06:06.191142 1464677 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 01:06:06.191153 1464677 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/addons for local assets ...
	I0815 01:06:06.191216 1464677 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-1398913/.minikube/files for local assets ...
	I0815 01:06:06.191297 1464677 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> 14042982.pem in /etc/ssl/certs
	I0815 01:06:06.191308 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> /etc/ssl/certs/14042982.pem
	I0815 01:06:06.191409 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:06:06.200855 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem --> /etc/ssl/certs/14042982.pem (1708 bytes)
	I0815 01:06:06.229150 1464677 start.go:296] duration metric: took 171.823034ms for postStartSetup
	I0815 01:06:06.229233 1464677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:06:06.229285 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:06.252681 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34670 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m04/id_rsa Username:docker}
	I0815 01:06:06.348751 1464677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 01:06:06.354301 1464677 fix.go:56] duration metric: took 5.318645355s for fixHost
	I0815 01:06:06.354325 1464677 start.go:83] releasing machines lock for "ha-095774-m04", held for 5.318701231s
	I0815 01:06:06.354408 1464677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774-m04
	I0815 01:06:06.376386 1464677 out.go:177] * Found network options:
	I0815 01:06:06.378865 1464677 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0815 01:06:06.381463 1464677 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 01:06:06.381485 1464677 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 01:06:06.381514 1464677 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 01:06:06.381525 1464677 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 01:06:06.381597 1464677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:06:06.381638 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:06.381656 1464677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:06:06.381719 1464677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 01:06:06.413715 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34670 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m04/id_rsa Username:docker}
	I0815 01:06:06.423915 1464677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34670 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m04/id_rsa Username:docker}
	I0815 01:06:06.708644 1464677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 01:06:06.713494 1464677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:06:06.725574 1464677 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 01:06:06.725676 1464677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:06:06.735904 1464677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 01:06:06.735931 1464677 start.go:495] detecting cgroup driver to use...
	I0815 01:06:06.735964 1464677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 01:06:06.736017 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:06:06.752462 1464677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:06:06.767137 1464677 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:06:06.767216 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:06:06.788677 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:06:06.823164 1464677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:06:06.953991 1464677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:06:07.092483 1464677 docker.go:233] disabling docker service ...
	I0815 01:06:07.092566 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:06:07.115369 1464677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:06:07.131676 1464677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:06:07.261577 1464677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:06:07.401248 1464677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:06:07.414315 1464677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:06:07.437947 1464677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:06:07.438097 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:06:07.450131 1464677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:06:07.450245 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:06:07.461117 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:06:07.472200 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:06:07.484237 1464677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:06:07.496199 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:06:07.507875 1464677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:06:07.520363 1464677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
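	(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and an unprivileged-port sysctl. A hedged reconstruction of what the drop-in would contain after those edits is shown below; it is inferred from the sed expressions, not a dump of the real file, and the section headers may differ.)

```toml
# /etc/crio/crio.conf.d/02-crio.conf (approximate result of the edits above)
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
```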
	I0815 01:06:07.531677 1464677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:06:07.546610 1464677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:06:07.556103 1464677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:06:07.719472 1464677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:06:07.891483 1464677 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:06:07.891625 1464677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:06:07.896344 1464677 start.go:563] Will wait 60s for crictl version
	I0815 01:06:07.896453 1464677 ssh_runner.go:195] Run: which crictl
	I0815 01:06:07.900592 1464677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:06:07.948780 1464677 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 01:06:07.948941 1464677 ssh_runner.go:195] Run: crio --version
	I0815 01:06:08.003762 1464677 ssh_runner.go:195] Run: crio --version
	I0815 01:06:08.073394 1464677 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 01:06:08.075997 1464677 out.go:177]   - env NO_PROXY=192.168.49.2
	I0815 01:06:08.078562 1464677 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0815 01:06:08.081091 1464677 cli_runner.go:164] Run: docker network inspect ha-095774 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 01:06:08.096966 1464677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 01:06:08.101246 1464677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:06:08.115112 1464677 mustload.go:65] Loading cluster: ha-095774
	I0815 01:06:08.115375 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:06:08.115636 1464677 cli_runner.go:164] Run: docker container inspect ha-095774 --format={{.State.Status}}
	I0815 01:06:08.134046 1464677 host.go:66] Checking if "ha-095774" exists ...
	I0815 01:06:08.134327 1464677 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774 for IP: 192.168.49.5
	I0815 01:06:08.134335 1464677 certs.go:194] generating shared ca certs ...
	I0815 01:06:08.134349 1464677 certs.go:226] acquiring lock for ca certs: {Name:mk7828e60149aaf109ce40cae2b300a118fa9ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:06:08.134508 1464677 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key
	I0815 01:06:08.134551 1464677 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key
	I0815 01:06:08.134562 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 01:06:08.134574 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 01:06:08.134586 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 01:06:08.134597 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 01:06:08.134652 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem (1338 bytes)
	W0815 01:06:08.134681 1464677 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298_empty.pem, impossibly tiny 0 bytes
	I0815 01:06:08.134689 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:06:08.134715 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/ca.pem (1082 bytes)
	I0815 01:06:08.134737 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:06:08.134759 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/key.pem (1679 bytes)
	I0815 01:06:08.134802 1464677 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem (1708 bytes)
	I0815 01:06:08.134831 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem -> /usr/share/ca-certificates/1404298.pem
	I0815 01:06:08.134842 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem -> /usr/share/ca-certificates/14042982.pem
	I0815 01:06:08.134856 1464677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:06:08.134874 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:06:08.165717 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:06:08.198673 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:06:08.231621 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 01:06:08.263926 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/certs/1404298.pem --> /usr/share/ca-certificates/1404298.pem (1338 bytes)
	I0815 01:06:08.300574 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/ssl/certs/14042982.pem --> /usr/share/ca-certificates/14042982.pem (1708 bytes)
	I0815 01:06:08.330907 1464677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:06:08.360019 1464677 ssh_runner.go:195] Run: openssl version
	I0815 01:06:08.367447 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14042982.pem && ln -fs /usr/share/ca-certificates/14042982.pem /etc/ssl/certs/14042982.pem"
	I0815 01:06:08.377909 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14042982.pem
	I0815 01:06:08.382220 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:50 /usr/share/ca-certificates/14042982.pem
	I0815 01:06:08.382337 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14042982.pem
	I0815 01:06:08.391568 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14042982.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:06:08.404571 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:06:08.417185 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:06:08.421283 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:06:08.421352 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:06:08.428858 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:06:08.438679 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1404298.pem && ln -fs /usr/share/ca-certificates/1404298.pem /etc/ssl/certs/1404298.pem"
	I0815 01:06:08.449258 1464677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1404298.pem
	I0815 01:06:08.453122 1464677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:50 /usr/share/ca-certificates/1404298.pem
	I0815 01:06:08.453191 1464677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1404298.pem
	I0815 01:06:08.461279 1464677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1404298.pem /etc/ssl/certs/51391683.0"
	I0815 01:06:08.472364 1464677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:06:08.476181 1464677 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 01:06:08.476229 1464677 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.0  false true} ...
	I0815 01:06:08.476322 1464677 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-095774-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-095774 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:06:08.476386 1464677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:06:08.486900 1464677 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:06:08.487048 1464677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0815 01:06:08.498257 1464677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 01:06:08.521135 1464677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:06:08.541442 1464677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0815 01:06:08.545245 1464677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:06:08.557469 1464677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:06:08.660533 1464677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:06:08.675798 1464677 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0815 01:06:08.676242 1464677 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:06:08.679533 1464677 out.go:177] * Verifying Kubernetes components...
	I0815 01:06:08.682030 1464677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:06:08.785520 1464677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:06:08.805222 1464677 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 01:06:08.805500 1464677 kapi.go:59] client config for ha-095774: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/ha-095774/client.key", CAFile:"/home/jenkins/minikube-integration/19443-1398913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cadb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 01:06:08.805569 1464677 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
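	(The kapi.go:59 and kubeadm.go:483 lines above show minikube loading the kubeconfig, noticing that its host points at the stale HA VIP 192.168.49.254:8443, and overriding it with a reachable control-plane endpoint before building a client. A minimal client-go sketch of that pattern follows; the kubeconfig path and override URL come from the log, the rest is illustrative.)

```go
// Sketch: load a kubeconfig, override a stale apiserver host, build a clientset.
// Illustrative only; not minikube's implementation.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/19443-1398913/kubeconfig"

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	// The kubeconfig points at the HA virtual IP; override it with a node
	// known to be reachable, as kubeadm.go:483 logs above.
	cfg.Host = "https://192.168.49.2:8443"

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}
```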
	I0815 01:06:08.805793 1464677 node_ready.go:35] waiting up to 6m0s for node "ha-095774-m04" to be "Ready" ...
	I0815 01:06:08.805874 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:08.805885 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:08.805893 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:08.805904 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:08.809584 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:09.305985 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:09.306010 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:09.306020 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:09.306024 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:09.308890 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:09.806353 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:09.806374 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:09.806383 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:09.806387 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:09.809409 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:10.306184 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:10.306210 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:10.306220 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:10.306225 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:10.309259 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:10.806117 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:10.806155 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:10.806169 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:10.806178 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:10.809681 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:10.810383 1464677 node_ready.go:53] node "ha-095774-m04" has status "Ready":"Unknown"
	I0815 01:06:11.306258 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:11.306289 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:11.306299 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:11.306304 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:11.309090 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:11.806628 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:11.806655 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:11.806664 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:11.806670 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:11.809510 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:12.306064 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:12.306085 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:12.306094 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:12.306099 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:12.309154 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:12.806528 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:12.806551 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:12.806561 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:12.806566 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:12.809384 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:12.810506 1464677 node_ready.go:53] node "ha-095774-m04" has status "Ready":"Unknown"
	I0815 01:06:13.306845 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:13.306866 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:13.306875 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:13.306880 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:13.309660 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:13.806669 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:13.806691 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:13.806705 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:13.806709 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:13.809563 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:14.305982 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:14.306008 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:14.306017 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:14.306022 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:14.309021 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:14.806041 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:14.806066 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:14.806074 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:14.806080 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:14.808943 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:15.306818 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:15.306843 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:15.306852 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:15.306857 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:15.309729 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:15.310529 1464677 node_ready.go:49] node "ha-095774-m04" has status "Ready":"True"
	I0815 01:06:15.310551 1464677 node_ready.go:38] duration metric: took 6.504736893s for node "ha-095774-m04" to be "Ready" ...
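	(The repeated GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04 requests above are node_ready.go polling the node object roughly every 500ms until its Ready condition flips from "Unknown" to "True", about 6.5s here. A hedged client-go sketch of that loop is below; waitForNodeReady is a hypothetical helper, not minikube's real function.)

```go
// Sketch: poll a node until its Ready condition is True, roughly what the
// node_ready.go loop above does. Library-style snippet; assumes a clientset
// built elsewhere (see the client config sketch further up).
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls at roughly this interval
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}
```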
	I0815 01:06:15.310562 1464677 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:06:15.310639 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 01:06:15.310645 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:15.310653 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:15.310658 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:15.315797 1464677 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 01:06:15.324944 1464677 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace to be "Ready" ...
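	(pod_ready.go then repeats the same pattern for each system-critical pod: it alternates GETs of the coredns-6f6b679f8f-b4vhd pod and of the node it runs on until the pod reports Ready. The check it is waiting on corresponds to the pod's PodReady condition; a minimal, illustrative sketch of that check is below.)

```go
// Sketch: the "is this pod Ready" check that the pod_ready.go loop above keeps
// repeating for coredns-6f6b679f8f-b4vhd. Hypothetical helper, not minikube's code.
package example

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```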
	I0815 01:06:15.325073 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:15.325086 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:15.325095 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:15.325099 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:15.330988 1464677 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 01:06:15.331685 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:15.331698 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:15.331706 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:15.331712 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:15.334645 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:15.825252 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:15.825274 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:15.825283 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:15.825287 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:15.828470 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:15.829353 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:15.829376 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:15.829386 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:15.829390 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:15.832365 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:16.325550 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:16.325629 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:16.325652 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:16.325670 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:16.329527 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:16.330329 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:16.330354 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:16.330364 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:16.330368 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:16.333781 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:16.825699 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:16.825722 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:16.825732 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:16.825736 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:16.828937 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:16.829790 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:16.829816 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:16.829827 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:16.829831 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:16.832712 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:17.326155 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:17.326182 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:17.326192 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:17.326197 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:17.329407 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:17.330428 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:17.330450 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:17.330459 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:17.330463 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:17.333332 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:17.333959 1464677 pod_ready.go:102] pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace has status "Ready":"False"
	I0815 01:06:17.826188 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:17.826215 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:17.826225 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:17.826229 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:17.829233 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:17.830076 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:17.830096 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:17.830107 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:17.830112 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:17.832909 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:18.325360 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:18.325385 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:18.325395 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:18.325400 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:18.329076 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:18.329873 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:18.329923 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:18.329942 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:18.329950 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:18.332642 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:18.826122 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:18.826149 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:18.826159 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:18.826165 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:18.829455 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:18.830490 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:18.830515 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:18.830525 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:18.830532 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:18.833697 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:19.325950 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:19.325973 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:19.325982 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:19.325986 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:19.329233 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:19.330228 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:19.330252 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:19.330261 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:19.330266 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:19.333046 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:19.825509 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:19.825532 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:19.825542 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:19.825547 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:19.828684 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:19.829661 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:19.829684 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:19.829693 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:19.829698 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:19.833456 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:19.834204 1464677 pod_ready.go:102] pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace has status "Ready":"False"
	I0815 01:06:20.325267 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:20.325292 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:20.325302 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:20.325306 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:20.328293 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:20.329185 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:20.329238 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:20.329263 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:20.329283 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:20.332115 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:20.825516 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:20.825543 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:20.825553 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:20.825556 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:20.828505 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:20.829310 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:20.829331 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:20.829340 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:20.829345 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:20.831769 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:21.325793 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:21.325818 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:21.325827 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:21.325867 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:21.328957 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:21.329761 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:21.329785 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:21.329795 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:21.329800 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:21.334119 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:21.825199 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:21.825225 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:21.825236 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:21.825240 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:21.828256 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:21.829028 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:21.829048 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:21.829057 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:21.829063 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:21.831924 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:22.326017 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:22.326043 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:22.326052 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:22.326058 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:22.329039 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:22.330222 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:22.330245 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:22.330254 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:22.330257 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:22.332797 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:22.333454 1464677 pod_ready.go:102] pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace has status "Ready":"False"
	I0815 01:06:22.825165 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:22.825191 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:22.825210 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:22.825216 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:22.827996 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:22.828646 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:22.828672 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:22.828681 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:22.828685 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:22.831038 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:23.326155 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:23.326177 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:23.326187 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:23.326191 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:23.329115 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:23.329775 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:23.329787 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:23.329796 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:23.329801 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:23.332669 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:23.825941 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:23.825970 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:23.825980 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:23.825983 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:23.828999 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:23.829820 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:23.829840 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:23.829850 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:23.829854 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:23.832473 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:24.325643 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:24.325670 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:24.325680 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:24.325685 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:24.328717 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:24.329842 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:24.329865 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:24.329874 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:24.329881 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:24.334300 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:24.335205 1464677 pod_ready.go:102] pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace has status "Ready":"False"
	I0815 01:06:24.825808 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:24.825834 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:24.825844 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:24.825850 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:24.829699 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:24.830816 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:24.830839 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:24.830848 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:24.830854 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:24.834484 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:25.325279 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:25.325354 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:25.325390 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:25.325414 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:25.331946 1464677 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 01:06:25.332798 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:25.332820 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:25.332830 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:25.332834 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:25.335724 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:25.825796 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:25.825820 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:25.825830 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:25.825836 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:25.828773 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:25.829594 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:25.829613 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:25.829622 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:25.829626 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:25.832327 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:26.325530 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:26.325557 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:26.325567 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:26.325573 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:26.329798 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:26.330656 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:26.330676 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:26.330686 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:26.330707 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:26.333406 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:26.825158 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:26.825185 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:26.825195 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:26.825200 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:26.828380 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:26.829105 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:26.829126 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:26.829134 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:26.829137 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:26.831561 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:26.832184 1464677 pod_ready.go:102] pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace has status "Ready":"False"
	I0815 01:06:27.325774 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:27.325799 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:27.325809 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:27.325814 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:27.329118 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:27.330082 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:27.330101 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:27.330111 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:27.330117 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:27.332510 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:27.825762 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:27.825794 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:27.825803 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:27.825807 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:27.828646 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:27.829539 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:27.829558 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:27.829569 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:27.829603 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:27.832243 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:28.325550 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:28.325575 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:28.325584 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:28.325589 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:28.329078 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:28.329964 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:28.329984 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:28.329995 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:28.330001 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:28.332613 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:28.825987 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:28.826015 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:28.826024 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:28.826030 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:28.828989 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:28.829814 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:28.829834 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:28.829842 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:28.829847 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:28.832488 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:28.833098 1464677 pod_ready.go:102] pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace has status "Ready":"False"
	I0815 01:06:29.325326 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:29.325397 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:29.325422 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:29.325443 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:29.328457 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:29.329116 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:29.329127 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:29.329135 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:29.329140 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:29.331549 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:29.825941 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:29.826027 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:29.826041 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:29.826045 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:29.829210 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:29.829874 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:29.829884 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:29.829893 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:29.829898 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:29.832552 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.325320 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-b4vhd
	I0815 01:06:30.325348 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.325367 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.325371 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.333235 1464677 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 01:06:30.334084 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:30.334106 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.334116 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.334119 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.336863 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.337546 1464677 pod_ready.go:97] node "ha-095774" hosting pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:30.337571 1464677 pod_ready.go:81] duration metric: took 15.012594146s for pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace to be "Ready" ...
	E0815 01:06:30.337609 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774" hosting pod "coredns-6f6b679f8f-b4vhd" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:30.337623 1464677 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-tl9kf" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:30.337704 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-tl9kf
	I0815 01:06:30.337714 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.337723 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.337730 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.340741 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.341838 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:30.341860 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.341870 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.341896 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.344631 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.345665 1464677 pod_ready.go:97] node "ha-095774" hosting pod "coredns-6f6b679f8f-tl9kf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:30.345698 1464677 pod_ready.go:81] duration metric: took 8.067559ms for pod "coredns-6f6b679f8f-tl9kf" in "kube-system" namespace to be "Ready" ...
	E0815 01:06:30.345709 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774" hosting pod "coredns-6f6b679f8f-tl9kf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:30.345716 1464677 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:30.345794 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095774
	I0815 01:06:30.345806 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.345821 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.345826 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.348579 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.349514 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:30.349532 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.349542 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.349545 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.352343 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.353062 1464677 pod_ready.go:97] node "ha-095774" hosting pod "etcd-ha-095774" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:30.353090 1464677 pod_ready.go:81] duration metric: took 7.365559ms for pod "etcd-ha-095774" in "kube-system" namespace to be "Ready" ...
	E0815 01:06:30.353100 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774" hosting pod "etcd-ha-095774" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:30.353108 1464677 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:30.353178 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-095774-m02
	I0815 01:06:30.353189 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.353197 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.353202 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.355997 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.356739 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:30.356759 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.356769 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.356773 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.359457 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.360153 1464677 pod_ready.go:92] pod "etcd-ha-095774-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 01:06:30.360175 1464677 pod_ready.go:81] duration metric: took 7.054981ms for pod "etcd-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:30.360198 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:30.360269 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774
	I0815 01:06:30.360280 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.360288 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.360292 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.363261 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.364312 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:30.364333 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.364342 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.364347 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.367068 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:30.367885 1464677 pod_ready.go:97] node "ha-095774" hosting pod "kube-apiserver-ha-095774" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:30.367914 1464677 pod_ready.go:81] duration metric: took 7.704962ms for pod "kube-apiserver-ha-095774" in "kube-system" namespace to be "Ready" ...
	E0815 01:06:30.367924 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774" hosting pod "kube-apiserver-ha-095774" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:30.367932 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:30.526351 1464677 request.go:632] Waited for 158.310305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774-m02
	I0815 01:06:30.526439 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774-m02
	I0815 01:06:30.526450 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.526458 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.526466 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.530840 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:30.726054 1464677 request.go:632] Waited for 194.336181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:30.726174 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:30.726210 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.726236 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.726254 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.730366 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:30.731229 1464677 pod_ready.go:92] pod "kube-apiserver-ha-095774-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 01:06:30.731253 1464677 pod_ready.go:81] duration metric: took 363.309214ms for pod "kube-apiserver-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:30.731280 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:30.926169 1464677 request.go:632] Waited for 194.812534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774
	I0815 01:06:30.926294 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774
	I0815 01:06:30.926326 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:30.926341 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:30.926346 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:30.930016 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:31.126197 1464677 request.go:632] Waited for 195.385132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:31.126289 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:31.126325 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:31.126341 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:31.126352 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:31.130671 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:31.131331 1464677 pod_ready.go:97] node "ha-095774" hosting pod "kube-controller-manager-ha-095774" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:31.131355 1464677 pod_ready.go:81] duration metric: took 400.063379ms for pod "kube-controller-manager-ha-095774" in "kube-system" namespace to be "Ready" ...
	E0815 01:06:31.131387 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774" hosting pod "kube-controller-manager-ha-095774" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:31.131395 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:31.326054 1464677 request.go:632] Waited for 194.581809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774-m02
	I0815 01:06:31.326187 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-095774-m02
	I0815 01:06:31.326198 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:31.326206 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:31.326212 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:31.329076 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:31.525705 1464677 request.go:632] Waited for 195.345863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:31.525820 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:31.525856 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:31.525882 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:31.525901 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:31.529966 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:31.530642 1464677 pod_ready.go:92] pod "kube-controller-manager-ha-095774-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 01:06:31.530664 1464677 pod_ready.go:81] duration metric: took 399.257077ms for pod "kube-controller-manager-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:31.530675 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p5kcz" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:31.726173 1464677 request.go:632] Waited for 195.430999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p5kcz
	I0815 01:06:31.726231 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p5kcz
	I0815 01:06:31.726240 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:31.726249 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:31.726260 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:31.729937 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:31.926017 1464677 request.go:632] Waited for 195.322372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:31.926077 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m04
	I0815 01:06:31.926083 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:31.926092 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:31.926099 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:31.929897 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:31.930530 1464677 pod_ready.go:92] pod "kube-proxy-p5kcz" in "kube-system" namespace has status "Ready":"True"
	I0815 01:06:31.930553 1464677 pod_ready.go:81] duration metric: took 399.869815ms for pod "kube-proxy-p5kcz" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:31.930566 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qfv9m" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:32.125595 1464677 request.go:632] Waited for 194.928571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qfv9m
	I0815 01:06:32.125687 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qfv9m
	I0815 01:06:32.125699 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:32.125708 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:32.125714 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:32.130008 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:32.325436 1464677 request.go:632] Waited for 194.27165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:32.325510 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:32.325517 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:32.325525 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:32.325529 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:32.329324 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:32.329831 1464677 pod_ready.go:92] pod "kube-proxy-qfv9m" in "kube-system" namespace has status "Ready":"True"
	I0815 01:06:32.329852 1464677 pod_ready.go:81] duration metric: took 399.277838ms for pod "kube-proxy-qfv9m" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:32.329864 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sdkx7" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:32.525930 1464677 request.go:632] Waited for 195.93556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sdkx7
	I0815 01:06:32.526003 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sdkx7
	I0815 01:06:32.526014 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:32.526029 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:32.526053 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:32.530977 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:32.725924 1464677 request.go:632] Waited for 194.323194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:32.725988 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:32.725997 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:32.726010 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:32.726017 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:32.731291 1464677 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 01:06:32.732247 1464677 pod_ready.go:97] node "ha-095774" hosting pod "kube-proxy-sdkx7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:32.732270 1464677 pod_ready.go:81] duration metric: took 402.398358ms for pod "kube-proxy-sdkx7" in "kube-system" namespace to be "Ready" ...
	E0815 01:06:32.732281 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774" hosting pod "kube-proxy-sdkx7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:32.732288 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095774" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:32.925797 1464677 request.go:632] Waited for 193.443663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774
	I0815 01:06:32.925871 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774
	I0815 01:06:32.925883 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:32.925891 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:32.925909 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:32.929965 1464677 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 01:06:33.125978 1464677 request.go:632] Waited for 195.373031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:33.126063 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774
	I0815 01:06:33.126076 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:33.126085 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:33.126092 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:33.129247 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:33.130292 1464677 pod_ready.go:97] node "ha-095774" hosting pod "kube-scheduler-ha-095774" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:33.130374 1464677 pod_ready.go:81] duration metric: took 398.075068ms for pod "kube-scheduler-ha-095774" in "kube-system" namespace to be "Ready" ...
	E0815 01:06:33.130451 1464677 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-095774" hosting pod "kube-scheduler-ha-095774" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-095774" has status "Ready":"Unknown"
	I0815 01:06:33.130478 1464677 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:33.326283 1464677 request.go:632] Waited for 195.697845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774-m02
	I0815 01:06:33.326449 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-095774-m02
	I0815 01:06:33.326481 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:33.326505 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:33.326524 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:33.329366 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:33.525650 1464677 request.go:632] Waited for 195.334574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:33.525707 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-095774-m02
	I0815 01:06:33.525719 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:33.525727 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:33.525738 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:33.528523 1464677 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 01:06:33.529813 1464677 pod_ready.go:92] pod "kube-scheduler-ha-095774-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 01:06:33.529880 1464677 pod_ready.go:81] duration metric: took 399.376546ms for pod "kube-scheduler-ha-095774-m02" in "kube-system" namespace to be "Ready" ...
	I0815 01:06:33.529909 1464677 pod_ready.go:38] duration metric: took 18.219335696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:06:33.529954 1464677 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:06:33.530046 1464677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:06:33.543605 1464677 system_svc.go:56] duration metric: took 13.643228ms WaitForService to wait for kubelet
	I0815 01:06:33.543640 1464677 kubeadm.go:582] duration metric: took 24.867795519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:06:33.543661 1464677 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:06:33.726010 1464677 request.go:632] Waited for 182.278106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0815 01:06:33.726084 1464677 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0815 01:06:33.726094 1464677 round_trippers.go:469] Request Headers:
	I0815 01:06:33.726102 1464677 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0815 01:06:33.726109 1464677 round_trippers.go:473]     Accept: application/json, */*
	I0815 01:06:33.729934 1464677 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 01:06:33.731223 1464677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 01:06:33.731254 1464677 node_conditions.go:123] node cpu capacity is 2
	I0815 01:06:33.731266 1464677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 01:06:33.731271 1464677 node_conditions.go:123] node cpu capacity is 2
	I0815 01:06:33.731276 1464677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 01:06:33.731281 1464677 node_conditions.go:123] node cpu capacity is 2
	I0815 01:06:33.731286 1464677 node_conditions.go:105] duration metric: took 187.620727ms to run NodePressure ...
	I0815 01:06:33.731299 1464677 start.go:241] waiting for startup goroutines ...
	I0815 01:06:33.731329 1464677 start.go:255] writing updated cluster config ...
	I0815 01:06:33.731656 1464677 ssh_runner.go:195] Run: rm -f paused
	I0815 01:06:33.796943 1464677 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:06:33.801545 1464677 out.go:177] * Done! kubectl is now configured to use "ha-095774" cluster and "default" namespace by default
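
The repeated GET pod / GET node pairs in the trace above are the pod_ready wait loop: it fetches the pod, then the node hosting it, and refuses to count the pod as "Ready" while the node's Ready condition is Unknown (the pod_ready.go:97 lines). The sketch below shows that pattern with client-go; it is not minikube's actual code, and the kubeconfig path, pod name usage, and 500ms poll interval are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podAndNodeReady mirrors the pattern in the trace: fetch the pod, fetch the
// node it runs on, and treat the pod as not Ready while the node's Ready
// condition is anything other than True.
func podAndNodeReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, nil // hosting node not "Ready": skip this pod for now
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// "/path/to/kubeconfig" is a placeholder, not a path taken from this report.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	for {
		ok, err := podAndNodeReady(ctx, cs, "kube-system", "coredns-6f6b679f8f-b4vhd")
		fmt.Println("ready:", ok, "err:", err)
		if ok {
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the trace timestamps
	}
}
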
	
	
	==> CRI-O <==
	Aug 15 01:05:59 ha-095774 crio[639]: time="2024-08-15 01:05:59.894268155Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/14730ff6887706b837022dd2a4804c92ecf48ae039f4adc77466c1e0b05b3803/merged/etc/group: no such file or directory"
	Aug 15 01:05:59 ha-095774 crio[639]: time="2024-08-15 01:05:59.944644521Z" level=info msg="Created container fb0bea1ec4973538cd5a02a5d0097024a307d0bfa255c1cb9e4733ac3d42ca21: kube-system/storage-provisioner/storage-provisioner" id=17a4be4c-17c4-4fc4-b4db-b534f102896a name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 01:05:59 ha-095774 crio[639]: time="2024-08-15 01:05:59.945363513Z" level=info msg="Starting container: fb0bea1ec4973538cd5a02a5d0097024a307d0bfa255c1cb9e4733ac3d42ca21" id=232b4aa9-d8dc-4028-8069-869c1909d231 name=/runtime.v1.RuntimeService/StartContainer
	Aug 15 01:05:59 ha-095774 crio[639]: time="2024-08-15 01:05:59.951412859Z" level=info msg="Started container" PID=1839 containerID=fb0bea1ec4973538cd5a02a5d0097024a307d0bfa255c1cb9e4733ac3d42ca21 description=kube-system/storage-provisioner/storage-provisioner id=232b4aa9-d8dc-4028-8069-869c1909d231 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca5e37fb224aa210a9369f00879e99575f91a0960721d32db61f3b16113e99e8
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.679842923Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=4d900c15-da64-4597-a408-3d1f15143c15 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.680072952Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=4d900c15-da64-4597-a408-3d1f15143c15 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.680803611Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=b636ef63-d705-48bd-bd16-a1aac2dd5a49 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.680988955Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=b636ef63-d705-48bd-bd16-a1aac2dd5a49 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.681771184Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-095774/kube-controller-manager" id=439574ef-485b-42b3-a707-67af52a1b131 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.681862834Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.782126251Z" level=info msg="Created container f1db918713f2f9d7aa95d8348fbefde03b2e26e69b94c773d4e0aed9dae55fb0: kube-system/kube-controller-manager-ha-095774/kube-controller-manager" id=439574ef-485b-42b3-a707-67af52a1b131 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.783690946Z" level=info msg="Starting container: f1db918713f2f9d7aa95d8348fbefde03b2e26e69b94c773d4e0aed9dae55fb0" id=5816b83e-4826-4225-9bdf-b0a87d0af422 name=/runtime.v1.RuntimeService/StartContainer
	Aug 15 01:06:04 ha-095774 crio[639]: time="2024-08-15 01:06:04.813318612Z" level=info msg="Started container" PID=1880 containerID=f1db918713f2f9d7aa95d8348fbefde03b2e26e69b94c773d4e0aed9dae55fb0 description=kube-system/kube-controller-manager-ha-095774/kube-controller-manager id=5816b83e-4826-4225-9bdf-b0a87d0af422 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a9ec704831fa86d942daea04eb444966226771095a2d6ad9b3afadc551663012
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.840283286Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.848383804Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.848427447Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.848444267Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.852538210Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.852573032Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.852588991Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.856493668Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.856533200Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.856554369Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.859922435Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 15 01:06:09 ha-095774 crio[639]: time="2024-08-15 01:06:09.859963099Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f1db918713f2f       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   31 seconds ago       Running             kube-controller-manager   8                   a9ec704831fa8       kube-controller-manager-ha-095774
	fb0bea1ec4973       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   36 seconds ago       Running             storage-provisioner       4                   ca5e37fb224aa       storage-provisioner
	6b1a0fef53ed6       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   42 seconds ago       Running             kube-vip                  3                   17431e1962c86       kube-vip-ha-095774
	07599592774c7       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   46 seconds ago       Running             kube-apiserver            4                   59affd6504ca7       kube-apiserver-ha-095774
	7f4ae7497d25d       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   318e254794aaa       busybox-7dff88458-jhcdf
	33c40481d510c       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   eb276c285af77       coredns-6f6b679f8f-b4vhd
	2035dcff26d18       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89   About a minute ago   Running             kube-proxy                2                   3aedc89918158       kube-proxy-sdkx7
	04feafb9fa0f2       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   265f5e91ef02d       kindnet-lgfzw
	5ca0bb9d4caa2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   ca5e37fb224aa       storage-provisioner
	7316fcba1891d       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   3ca170b093c38       coredns-6f6b679f8f-tl9kf
	b81bd8b52d79f       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   About a minute ago   Exited              kube-controller-manager   7                   a9ec704831fa8       kube-controller-manager-ha-095774
	bb1988e2d0ca6       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   3e9f3d0db250b       etcd-ha-095774
	833a092eebd5c       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   About a minute ago   Exited              kube-apiserver            3                   59affd6504ca7       kube-apiserver-ha-095774
	084fa61fa2747       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb   About a minute ago   Running             kube-scheduler            2                   8ec2767a47222       kube-scheduler-ha-095774
	9f29364856286       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   17431e1962c86       kube-vip-ha-095774
	
	
	==> coredns [33c40481d510c91877b124495bce26c189bb80775dbe979590f2f3ae137a1eba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58471 - 42553 "HINFO IN 484443966048362268.8594607797531105334. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.033180859s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1451387649]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 01:05:29.640) (total time: 30000ms):
	Trace[1451387649]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:05:59.640)
	Trace[1451387649]: [30.000795044s] [30.000795044s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[196508832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 01:05:29.640) (total time: 30000ms):
	Trace[196508832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:05:59.641)
	Trace[196508832]: [30.000474276s] [30.000474276s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[960476369]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 01:05:29.641) (total time: 30000ms):
	Trace[960476369]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:05:59.641)
	Trace[960476369]: [30.000609158s] [30.000609158s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
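
The 30-second "Reflector ListAndWatch" traces above are the client-go reflector inside CoreDNS's kubernetes plugin timing out against the in-cluster API VIP (10.96.0.1:443) while the apiserver was restarting. The following is a minimal list/watch sketch using client-go shared informers; it is not CoreDNS's actual implementation, only an illustration of why an unreachable VIP shows up as a repeated List timeout followed by retries.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster config resolves to the kubernetes Service VIP (10.96.0.1:443),
	// the same endpoint the failed List calls above were aimed at.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	services := factory.Core().V1().Services().Informer()
	services.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { fmt.Println("service synced") },
	})

	stop := make(chan struct{})
	factory.Start(stop)
	// The initial List must succeed before HasSynced turns true; while the VIP
	// is unreachable it times out and retries, producing traces like the ones above,
	// and the "ready" plugin keeps reporting "Still waiting on: kubernetes".
	if !cache.WaitForCacheSync(stop, services.HasSynced) {
		fmt.Println("caches never synced")
	}
}
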
	
	
	==> coredns [7316fcba1891d1a6af53e8b53aaec454d38eedc3fe08dc3c8777c18d82cd549b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57233 - 57601 "HINFO IN 2200752628864756858.1194019574353654822. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023038654s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[310915737]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 01:05:29.675) (total time: 30001ms):
	Trace[310915737]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:05:59.676)
	Trace[310915737]: [30.001364363s] [30.001364363s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1883510402]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 01:05:29.676) (total time: 30000ms):
	Trace[1883510402]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:05:59.676)
	Trace[1883510402]: [30.000364028s] [30.000364028s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2102823899]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 01:05:29.676) (total time: 30000ms):
	Trace[2102823899]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:05:59.677)
	Trace[2102823899]: [30.000247853s] [30.000247853s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-095774
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-095774
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-095774
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_54_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:54:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095774
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:05:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 01:05:15 +0000   Thu, 15 Aug 2024 01:06:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 01:05:15 +0000   Thu, 15 Aug 2024 01:06:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 01:05:15 +0000   Thu, 15 Aug 2024 01:06:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 01:05:15 +0000   Thu, 15 Aug 2024 01:06:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-095774
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	System Info:
	  Machine ID:                 b20b1fbc0496428d93547f43abf59910
	  System UUID:                b2a294bd-bcf9-4150-8837-954fc68bdd9a
	  Boot ID:                    a45aa34f-c9ce-4e83-8881-7d8273e4eb81
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jhcdf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 coredns-6f6b679f8f-b4vhd             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-6f6b679f8f-tl9kf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-ha-095774                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-lgfzw                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-095774             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-095774    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-sdkx7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-095774             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-095774                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 66s                    kube-proxy       
	  Normal   Starting                 6m26s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-095774 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-095774 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-095774 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-095774 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   RegisteredNode           7m31s                  node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   NodeHasSufficientPID     6m53s (x7 over 6m53s)  kubelet          Node ha-095774 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m53s (x8 over 6m53s)  kubelet          Node ha-095774 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  6m53s (x8 over 6m53s)  kubelet          Node ha-095774 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 6m53s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 6m53s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           6m20s                  node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   RegisteredNode           5m18s                  node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   RegisteredNode           3m40s                  node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   Starting                 119s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  119s (x8 over 119s)    kubelet          Node ha-095774 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s (x8 over 119s)    kubelet          Node ha-095774 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s (x7 over 119s)    kubelet          Node ha-095774 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           82s                    node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-095774 event: Registered Node ha-095774 in Controller
	  Normal   NodeNotReady             7s                     node-controller  Node ha-095774 status is now: NodeNotReady
	
	
	Name:               ha-095774-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-095774-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-095774
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_55_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:54:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095774-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:06:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:05:17 +0000   Thu, 15 Aug 2024 00:54:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:05:17 +0000   Thu, 15 Aug 2024 00:54:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:05:17 +0000   Thu, 15 Aug 2024 00:54:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:05:17 +0000   Thu, 15 Aug 2024 00:55:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-095774-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	System Info:
	  Machine ID:                 940e7cac3ba847c2af00f963ab523cfb
	  System UUID:                f93264dd-6e04-4254-9ee2-56ed2fa10f0e
	  Boot ID:                    a45aa34f-c9ce-4e83-8881-7d8273e4eb81
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kktjf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 etcd-ha-095774-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-wkfn6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-095774-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-095774-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qfv9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-095774-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-095774-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 7m51s                  kube-proxy       
	  Normal   Starting                 6m25s                  kube-proxy       
	  Normal   Starting                 75s                    kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-095774-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-095774-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-095774-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                    node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	  Normal   NodeHasSufficientPID     8m13s (x7 over 8m13s)  kubelet          Node ha-095774-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m13s (x8 over 8m13s)  kubelet          Node ha-095774-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m13s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m13s (x8 over 8m13s)  kubelet          Node ha-095774-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m31s                  node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	  Normal   NodeHasSufficientMemory  6m50s (x8 over 6m50s)  kubelet          Node ha-095774-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     6m50s (x7 over 6m50s)  kubelet          Node ha-095774-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m50s (x8 over 6m50s)  kubelet          Node ha-095774-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m50s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m50s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           6m20s                  node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	  Normal   RegisteredNode           5m18s                  node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	  Normal   RegisteredNode           3m40s                  node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-095774-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-095774-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-095774-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           82s                    node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-095774-m02 event: Registered Node ha-095774-m02 in Controller
	
	
	Name:               ha-095774-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-095774-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-095774
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_57_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-095774-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:06:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:06:15 +0000   Thu, 15 Aug 2024 01:06:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:06:15 +0000   Thu, 15 Aug 2024 01:06:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:06:15 +0000   Thu, 15 Aug 2024 01:06:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:06:15 +0000   Thu, 15 Aug 2024 01:06:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-095774-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022372Ki
	  pods:               110
	System Info:
	  Machine ID:                 2aca4fb59c91427e8220493e6d316307
	  System UUID:                a80c6bf8-2eae-46c6-aab2-72902ef060d3
	  Boot ID:                    a45aa34f-c9ce-4e83-8881-7d8273e4eb81
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7h94p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kindnet-dbrtf              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m11s
	  kube-system                 kube-proxy-p5kcz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m59s                  kube-proxy       
	  Normal   Starting                 11s                    kube-proxy       
	  Normal   Starting                 9m8s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    9m11s (x2 over 9m11s)  kubelet          Node ha-095774-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m11s (x2 over 9m11s)  kubelet          Node ha-095774-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  9m11s (x2 over 9m11s)  kubelet          Node ha-095774-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m8s                   node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   RegisteredNode           9m8s                   node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   RegisteredNode           9m6s                   node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   NodeReady                8m55s                  kubelet          Node ha-095774-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m31s                  node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   RegisteredNode           6m20s                  node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   NodeNotReady             5m40s                  node-controller  Node ha-095774-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           5m18s                  node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   RegisteredNode           3m40s                  node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   Starting                 3m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m16s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m9s (x7 over 3m16s)   kubelet          Node ha-095774-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    3m3s (x8 over 3m16s)   kubelet          Node ha-095774-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  3m3s (x8 over 3m16s)   kubelet          Node ha-095774-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           82s                    node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   NodeNotReady             42s                    node-controller  Node ha-095774-m04 status is now: NodeNotReady
	  Normal   Starting                 34s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     28s (x7 over 34s)      kubelet          Node ha-095774-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27s                    node-controller  Node ha-095774-m04 event: Registered Node ha-095774-m04 in Controller
	  Normal   NodeHasNoDiskPressure    21s (x8 over 34s)      kubelet          Node ha-095774-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  21s (x8 over 34s)      kubelet          Node ha-095774-m04 status is now: NodeHasSufficientMemory
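
The three node descriptions above are in kubectl describe node format. A minimal sketch, assuming the kubeconfig context for this profile is named ha-095774 (not confirmed by the log above), for spot-checking the Ready condition and taints behind the NodeNotReady events:

  # readiness of all nodes at a glance
  kubectl --context ha-095774 get nodes -o wide
  # Ready condition status and taints for the primary control-plane node
  kubectl --context ha-095774 get node ha-095774 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}{.spec.taints}{"\n"}'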
	
	
	==> dmesg <==
	[Aug15 00:11] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.606282] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [bb1988e2d0ca6d33c7b1585bf949e28acc2e0ca8fd4bf31d791143bd21062c97] <==
	{"level":"info","ts":"2024-08-15T01:05:09.075605Z","caller":"traceutil/trace.go:171","msg":"trace[937993855] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"6.671801261s","start":"2024-08-15T01:05:02.403799Z","end":"2024-08-15T01:05:09.075600Z","steps":["trace[937993855] 'agreement among raft nodes before linearized reading'  (duration: 6.662899431s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.075629Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.403754Z","time spent":"6.671868666s","remote":"127.0.0.1:48842","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:500 "}
	{"level":"info","ts":"2024-08-15T01:05:09.075660Z","caller":"traceutil/trace.go:171","msg":"trace[1501453654] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"6.544261405s","start":"2024-08-15T01:05:02.531395Z","end":"2024-08-15T01:05:09.075656Z","steps":["trace[1501453654] 'agreement among raft nodes before linearized reading'  (duration: 6.53531558s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.075681Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.531378Z","time spent":"6.544296671s","remote":"127.0.0.1:48802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	{"level":"info","ts":"2024-08-15T01:05:09.075700Z","caller":"traceutil/trace.go:171","msg":"trace[1335299554] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"6.683833301s","start":"2024-08-15T01:05:02.391863Z","end":"2024-08-15T01:05:09.075696Z","steps":["trace[1335299554] 'agreement among raft nodes before linearized reading'  (duration: 6.674853786s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.075728Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.391830Z","time spent":"6.683884499s","remote":"127.0.0.1:48680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:500 "}
	{"level":"info","ts":"2024-08-15T01:05:09.075752Z","caller":"traceutil/trace.go:171","msg":"trace[156387522] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; }","duration":"6.544379977s","start":"2024-08-15T01:05:02.531368Z","end":"2024-08-15T01:05:09.075748Z","steps":["trace[156387522] 'agreement among raft nodes before linearized reading'  (duration: 6.535355071s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.075779Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.531345Z","time spent":"6.54442467s","remote":"127.0.0.1:48760","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" limit:10000 "}
	{"level":"info","ts":"2024-08-15T01:05:09.075815Z","caller":"traceutil/trace.go:171","msg":"trace[484941535] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"6.695586352s","start":"2024-08-15T01:05:02.380223Z","end":"2024-08-15T01:05:09.075809Z","steps":["trace[484941535] 'agreement among raft nodes before linearized reading'  (duration: 6.686506226s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.075837Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.380193Z","time spent":"6.695637591s","remote":"127.0.0.1:48536","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	{"level":"info","ts":"2024-08-15T01:05:09.075861Z","caller":"traceutil/trace.go:171","msg":"trace[4110670] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; }","duration":"6.544661549s","start":"2024-08-15T01:05:02.531196Z","end":"2024-08-15T01:05:09.075857Z","steps":["trace[4110670] 'agreement among raft nodes before linearized reading'  (duration: 6.535544911s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.075888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.531159Z","time spent":"6.54472435s","remote":"127.0.0.1:48620","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 "}
	{"level":"info","ts":"2024-08-15T01:05:09.075930Z","caller":"traceutil/trace.go:171","msg":"trace[1232063033] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"6.719245641s","start":"2024-08-15T01:05:02.356658Z","end":"2024-08-15T01:05:09.075903Z","steps":["trace[1232063033] 'agreement among raft nodes before linearized reading'  (duration: 6.710084335s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.075961Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.356618Z","time spent":"6.719327823s","remote":"127.0.0.1:48802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 "}
	{"level":"info","ts":"2024-08-15T01:05:09.075989Z","caller":"traceutil/trace.go:171","msg":"trace[1017657022] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"6.546993112s","start":"2024-08-15T01:05:02.528992Z","end":"2024-08-15T01:05:09.075985Z","steps":["trace[1017657022] 'agreement among raft nodes before linearized reading'  (duration: 6.537761004s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.076009Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.528980Z","time spent":"6.547023782s","remote":"127.0.0.1:48748","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:10000 "}
	{"level":"info","ts":"2024-08-15T01:05:09.076048Z","caller":"traceutil/trace.go:171","msg":"trace[559665840] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; }","duration":"6.780023258s","start":"2024-08-15T01:05:02.296017Z","end":"2024-08-15T01:05:09.076040Z","steps":["trace[559665840] 'agreement among raft nodes before linearized reading'  (duration: 6.770738556s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.076070Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:02.295967Z","time spent":"6.780097809s","remote":"127.0.0.1:48738","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	{"level":"info","ts":"2024-08-15T01:05:09.076090Z","caller":"traceutil/trace.go:171","msg":"trace[132155171] range","detail":"{range_begin:; range_end:; }","duration":"7.570521511s","start":"2024-08-15T01:05:01.505564Z","end":"2024-08-15T01:05:09.076085Z","steps":["trace[132155171] 'agreement among raft nodes before linearized reading'  (duration: 7.561216649s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T01:05:09.076131Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T01:05:09.076343Z","caller":"traceutil/trace.go:171","msg":"trace[1825368976] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-wlkbtrcoop7orveomrjermzasy; range_end:; }","duration":"7.342381566s","start":"2024-08-15T01:05:01.733944Z","end":"2024-08-15T01:05:09.076326Z","steps":["trace[1825368976] 'agreement among raft nodes before linearized reading'  (duration: 7.332833573s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:05:09.076375Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:05:01.733901Z","time spent":"7.342464922s","remote":"127.0.0.1:48654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/apiserver-wlkbtrcoop7orveomrjermzasy\" "}
	{"level":"info","ts":"2024-08-15T01:05:09.101855Z","caller":"etcdserver/v3_server.go:912","msg":"first commit in current term: resending ReadIndex request"}
	{"level":"warn","ts":"2024-08-15T01:05:09.112412Z","caller":"etcdserver/v3_server.go:897","msg":"ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader","sent-request-id":8128031215594814216,"received-request-id":8128031215594814215}
	{"level":"info","ts":"2024-08-15T01:05:29.002952Z","caller":"traceutil/trace.go:171","msg":"trace[1381514359] transaction","detail":"{read_only:false; response_revision:2958; number_of_response:1; }","duration":"100.039103ms","start":"2024-08-15T01:05:28.902897Z","end":"2024-08-15T01:05:29.002936Z","steps":["trace[1381514359] 'process raft request'  (duration: 99.885488ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:06:36 up  9:48,  0 users,  load average: 1.91, 2.11, 2.03
	Linux ha-095774 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [04feafb9fa0f280e2202aceed2192ebd835470efadebfecef79bb219f28aaf7d] <==
	I0815 01:06:09.847663       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0815 01:06:09.847671       1 main.go:322] Node ha-095774-m04 has CIDR [10.244.3.0/24] 
	I0815 01:06:09.847709       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	W0815 01:06:17.032515       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 01:06:17.032630       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0815 01:06:17.618613       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 01:06:17.618648       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 01:06:19.840017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 01:06:19.840054       1 main.go:299] handling current node
	I0815 01:06:19.840069       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0815 01:06:19.840076       1 main.go:322] Node ha-095774-m02 has CIDR [10.244.1.0/24] 
	I0815 01:06:19.840207       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0815 01:06:19.840222       1 main.go:322] Node ha-095774-m04 has CIDR [10.244.3.0/24] 
	W0815 01:06:20.297849       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 01:06:20.297972       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 01:06:29.840470       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0815 01:06:29.840506       1 main.go:322] Node ha-095774-m04 has CIDR [10.244.3.0/24] 
	I0815 01:06:29.840643       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 01:06:29.840657       1 main.go:299] handling current node
	I0815 01:06:29.840669       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0815 01:06:29.840674       1 main.go:322] Node ha-095774-m02 has CIDR [10.244.1.0/24] 
	W0815 01:06:33.131369       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 01:06:33.131409       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0815 01:06:36.382834       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 01:06:36.382874       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	
	
	==> kube-apiserver [07599592774c79ce003470743bbb2855bb171cef5d6a78ee2724759055ed5a52] <==
	I0815 01:05:52.411734       1 establishing_controller.go:81] Starting EstablishingController
	I0815 01:05:52.411809       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0815 01:05:52.411873       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0815 01:05:52.411932       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0815 01:05:53.006633       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 01:05:53.006912       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 01:05:53.006975       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 01:05:53.007077       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 01:05:53.008033       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 01:05:53.010636       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 01:05:53.010803       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 01:05:53.010821       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 01:05:53.011345       1 aggregator.go:171] initial CRD sync complete...
	I0815 01:05:53.011368       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 01:05:53.011376       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 01:05:53.011382       1 cache.go:39] Caches are synced for autoregister controller
	I0815 01:05:53.015702       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 01:05:53.020285       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 01:05:53.020385       1 policy_source.go:224] refreshing policies
	I0815 01:05:53.020489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 01:05:53.067652       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 01:05:53.444631       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 01:05:54.059148       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0815 01:05:54.060856       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 01:05:54.071572       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [833a092eebd5c3a5c47eca2b08634641ac4ff5657ae0afa1cd3a12284c4178c9] <==
	E0815 01:05:09.112201       1 cacher.go:478] cacher (jobs.batch): unexpected ListAndWatch error: failed to list *batch.Job: etcdserver: leader changed; reinitializing...
	W0815 01:05:09.114665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PriorityLevelConfiguration: etcdserver: leader changed
	E0815 01:05:09.114710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: etcdserver: leader changed" logger="UnhandledError"
	W0815 01:05:09.114746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: etcdserver: leader changed
	E0815 01:05:09.114758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: etcdserver: leader changed" logger="UnhandledError"
	I0815 01:05:09.446695       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 01:05:10.840283       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 01:05:11.138521       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 01:05:11.140368       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 01:05:11.152431       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 01:05:11.438682       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 01:05:11.551902       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 01:05:11.551938       1 policy_source.go:224] refreshing policies
	I0815 01:05:11.824019       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 01:05:12.138358       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 01:05:12.138406       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 01:05:12.220594       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 01:05:12.240002       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E0815 01:05:12.250588       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0815 01:05:12.340933       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 01:05:12.341161       1 aggregator.go:171] initial CRD sync complete...
	I0815 01:05:12.341201       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 01:05:12.341232       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 01:05:12.341264       1 cache.go:39] Caches are synced for autoregister controller
	F0815 01:05:49.438926       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [b81bd8b52d79f3695e2258834385fe9692ce10a46026d798e51b845d005afb96] <==
	I0815 01:05:30.647053       1 serving.go:386] Generated self-signed cert in-memory
	I0815 01:05:32.080093       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 01:05:32.080129       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:05:32.081626       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 01:05:32.081792       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 01:05:32.081888       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 01:05:32.081981       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 01:05:42.104585       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [f1db918713f2f9d7aa95d8348fbefde03b2e26e69b94c773d4e0aed9dae55fb0] <==
	I0815 01:06:09.501913       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-095774-m02"
	I0815 01:06:09.501942       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-095774-m04"
	I0815 01:06:09.502459       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0815 01:06:09.513664       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0815 01:06:09.593317       1 shared_informer.go:320] Caches are synced for disruption
	I0815 01:06:09.614845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-095774-m04"
	I0815 01:06:09.614969       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 01:06:09.623121       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 01:06:10.043336       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 01:06:10.043478       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 01:06:10.065749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 01:06:15.123726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-095774-m04"
	I0815 01:06:15.123781       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-095774-m04"
	I0815 01:06:15.163268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-095774-m04"
	I0815 01:06:19.519337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-095774-m04"
	I0815 01:06:24.256656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.694µs"
	I0815 01:06:25.493589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="118.318843ms"
	I0815 01:06:25.493782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.734µs"
	I0815 01:06:29.915207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-095774"
	I0815 01:06:29.915462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-095774-m04"
	I0815 01:06:29.942299       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-095774"
	I0815 01:06:29.957456       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.241948ms"
	I0815 01:06:29.957784       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.635µs"
	I0815 01:06:34.640995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-095774"
	I0815 01:06:35.284913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-095774"
	
	
	==> kube-proxy [2035dcff26d183d530d12d58b892c021cd8fc7600b096add5ed174420e15072f] <==
	I0815 01:05:29.682356       1 server_linux.go:66] "Using iptables proxy"
	I0815 01:05:30.091076       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 01:05:30.091316       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:05:30.138335       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 01:05:30.138604       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:05:30.141707       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:05:30.142342       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:05:30.142884       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:05:30.144786       1 config.go:197] "Starting service config controller"
	I0815 01:05:30.144915       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:05:30.144975       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:05:30.145005       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:05:30.149366       1 config.go:326] "Starting node config controller"
	I0815 01:05:30.149474       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:05:30.246171       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:05:30.246322       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 01:05:30.249829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [084fa61fa274768b70c1a7ef60b5926fea6e16211bb99b73a6ab13ce57dcd27a] <==
	E0815 01:05:10.061076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:05:10.187303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 01:05:10.187358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:05:10.237149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 01:05:10.237199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:05:10.400757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 01:05:10.400804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:05:10.591604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 01:05:10.591721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0815 01:05:30.848978       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 01:05:52.883165       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:43532->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.918646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:43562->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.918795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:43602->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.918869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:43590->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.919775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:43670->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.919874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:43640->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.919957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:43666->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.920340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:43626->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.920468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:43662->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.921333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:43582->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.921482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:43610->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.921632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:43566->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.921759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:43650->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.921861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:43546->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0815 01:05:52.923556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:43668->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 15 01:05:47 ha-095774 kubelet[754]: E0815 01:05:47.575929     754 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683947575682547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:05:49 ha-095774 kubelet[754]: I0815 01:05:49.844424     754 scope.go:117] "RemoveContainer" containerID="833a092eebd5c3a5c47eca2b08634641ac4ff5657ae0afa1cd3a12284c4178c9"
	Aug 15 01:05:49 ha-095774 kubelet[754]: I0815 01:05:49.845016     754 status_manager.go:851] "Failed to get status for pod" podUID="d3fbf0862791ece0d2d549de931d417a" pod="kube-system/kube-apiserver-ha-095774" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-095774\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Aug 15 01:05:49 ha-095774 kubelet[754]: E0815 01:05:49.846299     754 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-095774.17ebc15c27c8b575\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-095774.17ebc15c27c8b575  kube-system   2909 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-095774,UID:d3fbf0862791ece0d2d549de931d417a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.0\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-095774,},FirstTimestamp:2024-08-15 01:04:43 +0000 UTC,LastTimestamp:2024-08-15 01:05:49.845592121 +0000 UTC m=+72.476263310,Count:2,Type:Normal,EventTime:0001-01-01 00:00
:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-095774,}"
	Aug 15 01:05:52 ha-095774 kubelet[754]: I0815 01:05:52.805683     754 scope.go:117] "RemoveContainer" containerID="b81bd8b52d79f3695e2258834385fe9692ce10a46026d798e51b845d005afb96"
	Aug 15 01:05:52 ha-095774 kubelet[754]: E0815 01:05:52.805894     754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-095774_kube-system(c119424530d9d7aed366002355bd1183)\"" pod="kube-system/kube-controller-manager-ha-095774" podUID="c119424530d9d7aed366002355bd1183"
	Aug 15 01:05:52 ha-095774 kubelet[754]: E0815 01:05:52.808222     754 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:52004->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Aug 15 01:05:52 ha-095774 kubelet[754]: E0815 01:05:52.809727     754 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:51986->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Aug 15 01:05:52 ha-095774 kubelet[754]: E0815 01:05:52.810137     754 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:52008->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Aug 15 01:05:52 ha-095774 kubelet[754]: E0815 01:05:52.810600     754 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:51970->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Aug 15 01:05:53 ha-095774 kubelet[754]: I0815 01:05:53.861423     754 scope.go:117] "RemoveContainer" containerID="9f293648562869d5e7b8a741aed2f1c7fca24fa2d1ae5c770da7f4180909891e"
	Aug 15 01:05:57 ha-095774 kubelet[754]: E0815 01:05:57.577509     754 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683957577060229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:05:57 ha-095774 kubelet[754]: E0815 01:05:57.577971     754 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683957577060229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:05:59 ha-095774 kubelet[754]: I0815 01:05:59.876347     754 scope.go:117] "RemoveContainer" containerID="5ca0bb9d4caa28b6b41f0fb9c07543ac0da09fd0a39fa756dd6825c93b544ea8"
	Aug 15 01:06:04 ha-095774 kubelet[754]: I0815 01:06:04.678933     754 scope.go:117] "RemoveContainer" containerID="b81bd8b52d79f3695e2258834385fe9692ce10a46026d798e51b845d005afb96"
	Aug 15 01:06:06 ha-095774 kubelet[754]: E0815 01:06:06.078009     754 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-095774?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 15 01:06:07 ha-095774 kubelet[754]: E0815 01:06:07.579780     754 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683967579396397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:06:07 ha-095774 kubelet[754]: E0815 01:06:07.579816     754 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683967579396397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:06:16 ha-095774 kubelet[754]: E0815 01:06:16.079104     754 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-095774?timeout=10s\": context deadline exceeded"
	Aug 15 01:06:17 ha-095774 kubelet[754]: E0815 01:06:17.581401     754 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683977581160549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:06:17 ha-095774 kubelet[754]: E0815 01:06:17.581437     754 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683977581160549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:06:26 ha-095774 kubelet[754]: E0815 01:06:26.079572     754 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-095774?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 15 01:06:27 ha-095774 kubelet[754]: E0815 01:06:27.583402     754 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683987583180674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:06:27 ha-095774 kubelet[754]: E0815 01:06:27.583442     754 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683987583180674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:06:36 ha-095774 kubelet[754]: E0815 01:06:36.080195     754 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-095774?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-095774 -n ha-095774
helpers_test.go:261: (dbg) Run:  kubectl --context ha-095774 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (129.04s)
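Note: the post-mortem above points at two recurring symptoms rather than a single crash: the kubelet repeatedly times out renewing its node lease against the control-plane endpoint control-plane.minikube.internal:8443 (192.168.49.254 in these logs, which also refuses connections), and kube-controller-manager sits in CrashLoopBackOff while the eviction manager cannot obtain image-filesystem stats for CRI-O's /var/lib/containers/storage/overlay-images mount. A minimal follow-up sketch for a rerun of this failure is below; these are hypothetical commands against the ha-095774 profile and assume the cluster from the failed run is still present:

	# Check whether the node lease is being renewed at all
	kubectl --context ha-095774 -n kube-node-lease get lease ha-095774 -o yaml

	# Inspect the control-plane pods the kubelet reports as crash-looping
	kubectl --context ha-095774 -n kube-system get pods -o wide

	# Ask CRI-O directly for the image-filesystem stats the eviction manager could not read
	out/minikube-linux-arm64 -p ha-095774 ssh "sudo crictl imagefsinfo"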

                                                
                                    

Test pass (295/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.04
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 8.15
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.21
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 221.7
31 TestAddons/serial/GCPAuth/Namespaces 0.21
33 TestAddons/parallel/Registry 17.22
35 TestAddons/parallel/InspektorGadget 11.81
39 TestAddons/parallel/CSI 58.79
40 TestAddons/parallel/Headlamp 12.35
41 TestAddons/parallel/CloudSpanner 6.01
42 TestAddons/parallel/LocalPath 10.52
43 TestAddons/parallel/NvidiaDevicePlugin 6.54
44 TestAddons/parallel/Yakd 11.79
45 TestAddons/StoppedEnableDisable 12.18
46 TestCertOptions 38.79
47 TestCertExpiration 250.6
49 TestForceSystemdFlag 44.07
50 TestForceSystemdEnv 36.11
56 TestErrorSpam/setup 31.06
57 TestErrorSpam/start 0.84
58 TestErrorSpam/status 1.1
59 TestErrorSpam/pause 1.99
60 TestErrorSpam/unpause 1.77
61 TestErrorSpam/stop 1.45
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 52.99
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 27.32
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.67
73 TestFunctional/serial/CacheCmd/cache/add_local 1.45
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 33.53
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.74
84 TestFunctional/serial/LogsFileCmd 2.09
85 TestFunctional/serial/InvalidService 4.28
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 11.35
89 TestFunctional/parallel/DryRun 0.55
90 TestFunctional/parallel/InternationalLanguage 0.19
91 TestFunctional/parallel/StatusCmd 1.23
95 TestFunctional/parallel/ServiceCmdConnect 10.62
96 TestFunctional/parallel/AddonsCmd 0.21
97 TestFunctional/parallel/PersistentVolumeClaim 23.8
99 TestFunctional/parallel/SSHCmd 0.73
100 TestFunctional/parallel/CpCmd 2.36
102 TestFunctional/parallel/FileSync 0.3
103 TestFunctional/parallel/CertSync 2.12
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.84
111 TestFunctional/parallel/License 0.38
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.48
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.24
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
125 TestFunctional/parallel/ProfileCmd/profile_list 0.4
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
127 TestFunctional/parallel/MountCmd/any-port 6.97
128 TestFunctional/parallel/ServiceCmd/List 0.52
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
131 TestFunctional/parallel/ServiceCmd/Format 0.51
132 TestFunctional/parallel/ServiceCmd/URL 0.51
133 TestFunctional/parallel/MountCmd/specific-port 2.34
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.61
135 TestFunctional/parallel/Version/short 0.06
136 TestFunctional/parallel/Version/components 1.1
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.95
142 TestFunctional/parallel/ImageCommands/Setup 0.83
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.52
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.58
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.72
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.95
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.7
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 179.51
160 TestMultiControlPlane/serial/DeployApp 7.34
161 TestMultiControlPlane/serial/PingHostFromPods 1.63
162 TestMultiControlPlane/serial/AddWorkerNode 38.03
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
165 TestMultiControlPlane/serial/CopyFile 19.07
166 TestMultiControlPlane/serial/StopSecondaryNode 12.76
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 20.96
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.58
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 281.62
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.67
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
173 TestMultiControlPlane/serial/StopCluster 35.86
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
176 TestMultiControlPlane/serial/AddSecondaryNode 74.15
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
181 TestJSONOutput/start/Command 48.5
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.75
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.87
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 41.01
207 TestKicCustomNetwork/use_default_bridge_network 34.14
208 TestKicExistingNetwork 34.5
209 TestKicCustomSubnet 34.03
210 TestKicStaticIP 35.03
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 70.57
215 TestMountStart/serial/StartWithMountFirst 6.64
216 TestMountStart/serial/VerifyMountFirst 0.27
217 TestMountStart/serial/StartWithMountSecond 9.47
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.62
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 8.11
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 77.74
227 TestMultiNode/serial/DeployApp2Nodes 4.8
228 TestMultiNode/serial/PingHostFrom2Pods 0.99
229 TestMultiNode/serial/AddNode 31.72
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.34
232 TestMultiNode/serial/CopyFile 10.07
233 TestMultiNode/serial/StopNode 2.24
234 TestMultiNode/serial/StartAfterStop 10.53
235 TestMultiNode/serial/RestartKeepsNodes 80.42
236 TestMultiNode/serial/DeleteNode 5.3
237 TestMultiNode/serial/StopMultiNode 23.91
238 TestMultiNode/serial/RestartMultiNode 56.87
239 TestMultiNode/serial/ValidateNameConflict 33.66
244 TestPreload 130.32
246 TestScheduledStopUnix 107
249 TestInsufficientStorage 10.59
250 TestRunningBinaryUpgrade 74.15
252 TestKubernetesUpgrade 382.04
253 TestMissingContainerUpgrade 172.63
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
256 TestNoKubernetes/serial/StartWithK8s 39.68
257 TestNoKubernetes/serial/StartWithStopK8s 10.56
258 TestNoKubernetes/serial/Start 6.88
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
260 TestNoKubernetes/serial/ProfileList 1.14
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 7.81
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
264 TestStoppedBinaryUpgrade/Setup 1.47
265 TestStoppedBinaryUpgrade/Upgrade 99.94
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
275 TestPause/serial/Start 52.09
276 TestPause/serial/SecondStartNoReconfiguration 26.52
277 TestPause/serial/Pause 0.76
278 TestPause/serial/VerifyStatus 0.34
279 TestPause/serial/Unpause 0.71
280 TestPause/serial/PauseAgain 0.94
281 TestPause/serial/DeletePaused 2.84
282 TestPause/serial/VerifyDeletedResources 0.48
290 TestNetworkPlugins/group/false 4.54
295 TestStartStop/group/old-k8s-version/serial/FirstStart 149.59
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.67
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
298 TestStartStop/group/old-k8s-version/serial/Stop 12.12
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
300 TestStartStop/group/old-k8s-version/serial/SecondStart 152.18
302 TestStartStop/group/no-preload/serial/FirstStart 69.24
303 TestStartStop/group/no-preload/serial/DeployApp 9.37
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
305 TestStartStop/group/no-preload/serial/Stop 11.99
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/no-preload/serial/SecondStart 267.26
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/old-k8s-version/serial/Pause 3.02
313 TestStartStop/group/embed-certs/serial/FirstStart 49.16
314 TestStartStop/group/embed-certs/serial/DeployApp 9.36
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
316 TestStartStop/group/embed-certs/serial/Stop 11.97
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
318 TestStartStop/group/embed-certs/serial/SecondStart 301.44
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/no-preload/serial/Pause 3.41
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.5
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.62
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
333 TestStartStop/group/embed-certs/serial/Pause 3.18
335 TestStartStop/group/newest-cni/serial/FirstStart 36
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.32
338 TestStartStop/group/newest-cni/serial/Stop 1.26
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 15.68
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
344 TestStartStop/group/newest-cni/serial/Pause 3.16
345 TestNetworkPlugins/group/auto/Start 52.52
346 TestNetworkPlugins/group/auto/KubeletFlags 0.3
347 TestNetworkPlugins/group/auto/NetCatPod 12.28
348 TestNetworkPlugins/group/auto/DNS 0.18
349 TestNetworkPlugins/group/auto/Localhost 0.16
350 TestNetworkPlugins/group/auto/HairPin 0.2
351 TestNetworkPlugins/group/kindnet/Start 52.58
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
354 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
355 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
356 TestNetworkPlugins/group/kindnet/DNS 0.19
357 TestNetworkPlugins/group/kindnet/Localhost 0.19
358 TestNetworkPlugins/group/kindnet/HairPin 0.16
359 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
360 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.31
362 TestNetworkPlugins/group/calico/Start 73.98
363 TestNetworkPlugins/group/custom-flannel/Start 59.53
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.41
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/custom-flannel/DNS 0.2
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
370 TestNetworkPlugins/group/calico/KubeletFlags 0.34
371 TestNetworkPlugins/group/calico/NetCatPod 11.29
372 TestNetworkPlugins/group/calico/DNS 0.29
373 TestNetworkPlugins/group/calico/Localhost 0.29
374 TestNetworkPlugins/group/calico/HairPin 0.21
375 TestNetworkPlugins/group/enable-default-cni/Start 78.18
376 TestNetworkPlugins/group/flannel/Start 61.95
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
381 TestNetworkPlugins/group/flannel/NetCatPod 11.29
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
385 TestNetworkPlugins/group/flannel/DNS 0.19
386 TestNetworkPlugins/group/flannel/Localhost 0.19
387 TestNetworkPlugins/group/flannel/HairPin 0.16
388 TestNetworkPlugins/group/bridge/Start 72.84
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
390 TestNetworkPlugins/group/bridge/NetCatPod 11.27
391 TestNetworkPlugins/group/bridge/DNS 0.18
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (9.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-403161 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-403161 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.037823708s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.04s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-403161
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-403161: exit status 85 (67.782007ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-403161 | jenkins | v1.33.1 | 15 Aug 24 00:38 UTC |          |
	|         | -p download-only-403161        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:38:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:38:47.191550 1404303 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:38:47.191756 1404303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:38:47.191787 1404303 out.go:304] Setting ErrFile to fd 2...
	I0815 00:38:47.191807 1404303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:38:47.192068 1404303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	W0815 00:38:47.192238 1404303 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19443-1398913/.minikube/config/config.json: open /home/jenkins/minikube-integration/19443-1398913/.minikube/config/config.json: no such file or directory
	I0815 00:38:47.192686 1404303 out.go:298] Setting JSON to true
	I0815 00:38:47.193572 1404303 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33670,"bootTime":1723648658,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0815 00:38:47.193689 1404303 start.go:139] virtualization:  
	I0815 00:38:47.196496 1404303 out.go:97] [download-only-403161] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0815 00:38:47.196672 1404303 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 00:38:47.196718 1404303 notify.go:220] Checking for updates...
	I0815 00:38:47.199109 1404303 out.go:169] MINIKUBE_LOCATION=19443
	I0815 00:38:47.200973 1404303 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:38:47.202986 1404303 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 00:38:47.204513 1404303 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	I0815 00:38:47.206154 1404303 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0815 00:38:47.209577 1404303 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 00:38:47.209857 1404303 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:38:47.233369 1404303 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:38:47.233484 1404303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:38:47.297073 1404303 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 00:38:47.286719861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:38:47.297214 1404303 docker.go:307] overlay module found
	I0815 00:38:47.299079 1404303 out.go:97] Using the docker driver based on user configuration
	I0815 00:38:47.299111 1404303 start.go:297] selected driver: docker
	I0815 00:38:47.299118 1404303 start.go:901] validating driver "docker" against <nil>
	I0815 00:38:47.299227 1404303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:38:47.351270 1404303 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 00:38:47.342332153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:38:47.351435 1404303 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:38:47.351733 1404303 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0815 00:38:47.351934 1404303 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 00:38:47.353937 1404303 out.go:169] Using Docker driver with root privileges
	I0815 00:38:47.355669 1404303 cni.go:84] Creating CNI manager for ""
	I0815 00:38:47.355698 1404303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:38:47.355710 1404303 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:38:47.355798 1404303 start.go:340] cluster config:
	{Name:download-only-403161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-403161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:38:47.357540 1404303 out.go:97] Starting "download-only-403161" primary control-plane node in "download-only-403161" cluster
	I0815 00:38:47.357580 1404303 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 00:38:47.359211 1404303 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:38:47.359248 1404303 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 00:38:47.359346 1404303 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:38:47.374716 1404303 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:38:47.375461 1404303 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:38:47.375564 1404303 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:38:47.450420 1404303 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0815 00:38:47.450446 1404303 cache.go:56] Caching tarball of preloaded images
	I0815 00:38:47.451171 1404303 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 00:38:47.453531 1404303 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 00:38:47.453580 1404303 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0815 00:38:47.568682 1404303 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-403161 host does not exist
	  To start a cluster, run: "minikube start -p download-only-403161"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-403161
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (8.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-660423 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-660423 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.154655613s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (8.15s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-660423
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-660423: exit status 85 (72.871273ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-403161 | jenkins | v1.33.1 | 15 Aug 24 00:38 UTC |                     |
	|         | -p download-only-403161        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 00:38 UTC | 15 Aug 24 00:38 UTC |
	| delete  | -p download-only-403161        | download-only-403161 | jenkins | v1.33.1 | 15 Aug 24 00:38 UTC | 15 Aug 24 00:38 UTC |
	| start   | -o=json --download-only        | download-only-660423 | jenkins | v1.33.1 | 15 Aug 24 00:38 UTC |                     |
	|         | -p download-only-660423        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:38:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:38:56.630961 1404507 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:38:56.631151 1404507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:38:56.631162 1404507 out.go:304] Setting ErrFile to fd 2...
	I0815 00:38:56.631167 1404507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:38:56.631416 1404507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 00:38:56.631835 1404507 out.go:298] Setting JSON to true
	I0815 00:38:56.632711 1404507 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33679,"bootTime":1723648658,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0815 00:38:56.632792 1404507 start.go:139] virtualization:  
	I0815 00:38:56.635361 1404507 out.go:97] [download-only-660423] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 00:38:56.635587 1404507 notify.go:220] Checking for updates...
	I0815 00:38:56.637230 1404507 out.go:169] MINIKUBE_LOCATION=19443
	I0815 00:38:56.638917 1404507 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:38:56.640631 1404507 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 00:38:56.642710 1404507 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	I0815 00:38:56.644751 1404507 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0815 00:38:56.648173 1404507 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 00:38:56.648455 1404507 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:38:56.668907 1404507 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:38:56.669032 1404507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:38:56.742018 1404507 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 00:38:56.732834374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:38:56.742123 1404507 docker.go:307] overlay module found
	I0815 00:38:56.744046 1404507 out.go:97] Using the docker driver based on user configuration
	I0815 00:38:56.744071 1404507 start.go:297] selected driver: docker
	I0815 00:38:56.744079 1404507 start.go:901] validating driver "docker" against <nil>
	I0815 00:38:56.744192 1404507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:38:56.805830 1404507 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 00:38:56.796728877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:38:56.806010 1404507 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:38:56.806314 1404507 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0815 00:38:56.806507 1404507 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 00:38:56.808499 1404507 out.go:169] Using Docker driver with root privileges
	I0815 00:38:56.810092 1404507 cni.go:84] Creating CNI manager for ""
	I0815 00:38:56.810124 1404507 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 00:38:56.810136 1404507 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:38:56.810228 1404507 start.go:340] cluster config:
	{Name:download-only-660423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-660423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:38:56.812269 1404507 out.go:97] Starting "download-only-660423" primary control-plane node in "download-only-660423" cluster
	I0815 00:38:56.812297 1404507 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 00:38:56.814113 1404507 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 00:38:56.814136 1404507 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:38:56.814293 1404507 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 00:38:56.829756 1404507 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 00:38:56.829916 1404507 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 00:38:56.829939 1404507 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 00:38:56.829948 1404507 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 00:38:56.829956 1404507 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 00:38:56.884922 1404507 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0815 00:38:56.884955 1404507 cache.go:56] Caching tarball of preloaded images
	I0815 00:38:56.885118 1404507 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:38:56.887294 1404507 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 00:38:56.887317 1404507 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0815 00:38:57.008170 1404507 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e6af375765e1700a37be5f07489fb80e -> /home/jenkins/minikube-integration/19443-1398913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-660423 host does not exist
	  To start a cluster, run: "minikube start -p download-only-660423"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)
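The log above records the download-only path: the kic base image is already in the local cache, so only the Kubernetes preload tarball is fetched and no host is created. A hedged reconstruction of the same run, with flags inferred from the log (the profile name is reused purely for illustration; the real test invocation is not shown verbatim):

	out/minikube-linux-arm64 start -p download-only-660423 --download-only --force \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.31.0
	# no host is created, so asking for logs afterwards exits 85 by design:
	out/minikube-linux-arm64 logs -p download-only-660423; echo "exit: $?"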

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-660423
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-338566 --alsologtostderr --binary-mirror http://127.0.0.1:37403 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-338566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-338566
--- PASS: TestBinaryMirror (0.56s)
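For reference, the --binary-mirror flag exercised above redirects the kubectl/kubelet/kubeadm binary downloads to an alternate URL. A minimal sketch under the same flags; the profile name below is a placeholder and the test's throwaway mirror at 127.0.0.1:37403 will not exist outside the harness:

	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:37403 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 delete -p binary-mirror-demo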

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-177998
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-177998: exit status 85 (77.374033ms)

                                                
                                                
-- stdout --
	* Profile "addons-177998" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-177998"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-177998
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-177998: exit status 85 (69.633989ms)

                                                
                                                
-- stdout --
	* Profile "addons-177998" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-177998"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (221.7s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-177998 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-177998 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m41.700266504s)
--- PASS: TestAddons/Setup (221.70s)
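A quick sketch of inspecting the result of a start like the one above: `addons list` and per-addon disable are standard minikube subcommands, and the addon named below is one of those passed on the start command line:

	out/minikube-linux-arm64 -p addons-177998 addons list
	out/minikube-linux-arm64 -p addons-177998 addons disable volcano --alsologtostderr -v=1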

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-177998 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-177998 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.111036ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-pjk6z" [8d5b9336-317e-46bc-aca7-c582ff9a713b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003921074s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mhl5f" [ffcca5c8-f85a-422d-ae88-317ee7017802] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007228924s
addons_test.go:342: (dbg) Run:  kubectl --context addons-177998 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-177998 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-177998 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.147575062s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 ip
2024/08/15 00:43:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.22s)
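The DEBUG GET above hits the registry addon through the node IP on port 5000. A hedged manual equivalent; the /v2/_catalog path is the standard Docker registry HTTP API, not something this test itself calls:

	curl -s "http://$(out/minikube-linux-arm64 -p addons-177998 ip):5000/v2/_catalog"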

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9tfpn" [bb930e56-1fb8-48b0-939e-d5653bc8c277] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004196139s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-177998
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-177998: (5.799765239s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                    
x
+
TestAddons/parallel/CSI (58.79s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 13.980984ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-177998 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-177998 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e510c7d6-6569-4212-a765-6c829e7aa2bb] Pending
helpers_test.go:344: "task-pv-pod" [e510c7d6-6569-4212-a765-6c829e7aa2bb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e510c7d6-6569-4212-a765-6c829e7aa2bb] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003270482s
addons_test.go:590: (dbg) Run:  kubectl --context addons-177998 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-177998 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-177998 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-177998 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-177998 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-177998 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-177998 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [27b2bbf6-4b56-45ce-b731-61d802ffeff5] Pending
helpers_test.go:344: "task-pv-pod-restore" [27b2bbf6-4b56-45ce-b731-61d802ffeff5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [27b2bbf6-4b56-45ce-b731-61d802ffeff5] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004511813s
addons_test.go:632: (dbg) Run:  kubectl --context addons-177998 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-177998 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-177998 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-177998 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.947042959s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-177998 addons disable volumesnapshots --alsologtostderr -v=1: (1.037687545s)
--- PASS: TestAddons/parallel/CSI (58.79s)
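The pvc.yaml applied above is not reproduced in the log. A hypothetical equivalent is sketched below, assuming the csi-hostpath-driver addon's default storage class is named csi-hostpath-sc; both the manifest contents and that class name are assumptions, not the test's actual testdata:

	cat <<'EOF' | kubectl --context addons-177998 create -f -
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: csi-hostpath-sc
	  resources:
	    requests:
	      storage: 1Gi
	EOF
	# the test polls the same field until the claim reports Bound:
	kubectl --context addons-177998 get pvc hpvc -o jsonpath={.status.phase}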

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-177998 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-177998 --alsologtostderr -v=1: (1.068675082s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-zzp8q" [b5acc6aa-6493-4164-a640-1c606cd85804] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-zzp8q" [b5acc6aa-6493-4164-a640-1c606cd85804] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-zzp8q" [b5acc6aa-6493-4164-a640-1c606cd85804] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00415174s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.35s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-rklnt" [1a62ae85-5e49-4c25-bf50-13bf28007497] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012359425s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-177998
--- PASS: TestAddons/parallel/CloudSpanner (6.01s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.52s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-177998 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-177998 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-177998 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7f7c64f9-37cd-41b5-929d-1b8ea4897c9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7f7c64f9-37cd-41b5-929d-1b8ea4897c9b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7f7c64f9-37cd-41b5-929d-1b8ea4897c9b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003454465s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-177998 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 ssh "cat /opt/local-path-provisioner/pvc-2ebb18e5-943e-4735-a7ec-2a8e78491a99_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-177998 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-177998 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.52s)
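While a local-path PVC is bound, its backing directory lives on the node under the path seen in the ssh call above; a small sketch of listing it directly (the directory is removed once the PVC is deleted, so this only works before cleanup):

	out/minikube-linux-arm64 -p addons-177998 ssh "ls /opt/local-path-provisioner/"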

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7b7wb" [83483a1f-e9b5-416a-922d-45fe573a70cc] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003926589s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-177998
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-tljnt" [e68287fe-c90c-4e58-98b9-ce6b9dc7afe1] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004162907s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-177998 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-177998 addons disable yakd --alsologtostderr -v=1: (5.787897253s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.18s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-177998
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-177998: (11.89835691s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-177998
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-177998
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-177998
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

                                                
                                    
x
+
TestCertOptions (38.79s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-000724 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-000724 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.013208611s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-000724 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-000724 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-000724 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-000724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-000724
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-000724: (2.076969458s)
--- PASS: TestCertOptions (38.79s)
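The openssl call above dumps the whole apiserver certificate; a sketch of narrowing it to the SAN entries added by --apiserver-ips/--apiserver-names (run against a profile that is still up, since the test deletes its profile at the end):

	out/minikube-linux-arm64 -p cert-options-000724 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"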

                                                
                                    
x
+
TestCertExpiration (250.6s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-234599 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-234599 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.021519235s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-234599 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-234599 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.869716119s)
helpers_test.go:175: Cleaning up "cert-expiration-234599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-234599
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-234599: (2.711711879s)
--- PASS: TestCertExpiration (250.60s)
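A minimal sketch of the same two-step flow: issue short-lived certificates, then renew them on a second start with a longer --cert-expiration. The profile name cert-demo is hypothetical, and the openssl check in the middle is an addition for illustration, not something the test performs:

	out/minikube-linux-arm64 start -p cert-demo --memory=2048 --cert-expiration=3m \
	  --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p cert-demo ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
	out/minikube-linux-arm64 start -p cert-demo --memory=2048 --cert-expiration=8760h \
	  --driver=docker --container-runtime=crio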

                                                
                                    
x
+
TestForceSystemdFlag (44.07s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-224943 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-224943 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.219322901s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-224943 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-224943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-224943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-224943: (2.510298324s)
--- PASS: TestForceSystemdFlag (44.07s)
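The test cats the CRI-O drop-in to confirm the cgroup manager; a sketch of the same check narrowed with grep, assuming --force-systemd results in cgroup_manager = "systemd" in that file (this has to run before the profile is deleted, or against a fresh profile started the same way):

	out/minikube-linux-arm64 -p force-systemd-flag-224943 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"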

                                                
                                    
x
+
TestForceSystemdEnv (36.11s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-287473 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-287473 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.55854547s)
helpers_test.go:175: Cleaning up "force-systemd-env-287473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-287473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-287473: (2.548919864s)
--- PASS: TestForceSystemdEnv (36.11s)

                                                
                                    
x
+
TestErrorSpam/setup (31.06s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-270210 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-270210 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-270210 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-270210 --driver=docker  --container-runtime=crio: (31.056424585s)
--- PASS: TestErrorSpam/setup (31.06s)

                                                
                                    
x
+
TestErrorSpam/start (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
x
+
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
x
+
TestErrorSpam/pause (1.99s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 pause
--- PASS: TestErrorSpam/pause (1.99s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

                                                
                                    
x
+
TestErrorSpam/stop (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 stop: (1.264249078s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-270210 --log_dir /tmp/nospam-270210 stop
--- PASS: TestErrorSpam/stop (1.45s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19443-1398913/.minikube/files/etc/test/nested/copy/1404298/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.99s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-675813 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-675813 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (52.985961101s)
--- PASS: TestFunctional/serial/StartWithProxy (52.99s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.32s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-675813 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-675813 --alsologtostderr -v=8: (27.317287124s)
functional_test.go:663: soft start took 27.317979156s for "functional-675813" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.32s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-675813 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 cache add registry.k8s.io/pause:3.1: (1.515716329s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 cache add registry.k8s.io/pause:3.3: (1.690365853s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 cache add registry.k8s.io/pause:latest: (1.463070375s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.67s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-675813 /tmp/TestFunctionalserialCacheCmdcacheadd_local1085844957/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cache add minikube-local-cache-test:functional-675813
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cache delete minikube-local-cache-test:functional-675813
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-675813
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.467779ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 cache reload: (1.253261894s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)
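Condensed, the round trip exercised above is: add an image to minikube's local cache, remove it from the node with crictl, push the cache back in with `cache reload`, then confirm the image is present again. Every command here appears verbatim in the log above:

	out/minikube-linux-arm64 -p functional-675813 cache add registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-675813 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-675813 cache reload
	out/minikube-linux-arm64 -p functional-675813 ssh sudo crictl inspecti registry.k8s.io/pause:latest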

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 kubectl -- --context functional-675813 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-675813 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.53s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-675813 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0815 00:52:48.814906 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:48.821728 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:48.833115 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:48.854578 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:48.896067 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:48.977665 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:49.139271 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:49.460993 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:50.103157 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:51.385073 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:52:53.946588 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-675813 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.525117306s)
functional_test.go:761: restart took 33.525260878s for "functional-675813" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.53s)
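A sketch of the same restart plus a spot-check that the extra apiserver flag actually landed; the label selector relies on the usual kubeadm convention of tagging the static apiserver pod with component=kube-apiserver, which is an assumption and not something shown in the log:

  # restart with an apiserver override and wait for every component to come back
  minikube start -p functional-675813 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  # confirm the flag is present on the running apiserver's command line
  kubectl --context functional-675813 -n kube-system get pods -l component=kube-apiserver \
    -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins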

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-675813 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
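The same phase information can be pulled with a single jsonpath query instead of parsing the full JSON dump (a sketch only; tier=control-plane is the label the check above already selects on):

  # print each control-plane pod with its phase, mirroring the Running/Ready checks above
  kubectl --context functional-675813 -n kube-system get po -l tier=control-plane \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'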

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 logs: (1.740937659s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 logs --file /tmp/TestFunctionalserialLogsFileCmd630342789/001/logs.txt
E0815 00:52:59.068832 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 logs --file /tmp/TestFunctionalserialLogsFileCmd630342789/001/logs.txt: (2.088942522s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.09s)
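Both logs variants can be reproduced directly; the output file path below is only an example:

  # dump cluster logs to stdout, then write them to a file instead
  minikube -p functional-675813 logs
  minikube -p functional-675813 logs --file /tmp/functional-675813-logs.txt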

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.28s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-675813 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-675813
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-675813: exit status 115 (537.820448ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31110 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-675813 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.28s)
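The testdata/invalidsvc.yaml manifest is not reproduced in the log; a stand-in that triggers the same SVC_UNREACHABLE exit is any Service whose selector matches no pods, for example:

  # create a NodePort service whose selector (app=invalid-svc) matches nothing
  kubectl --context functional-675813 create service nodeport invalid-svc --tcp=80:8080
  # expect exit status 115 because no running pod backs the service
  minikube service invalid-svc -p functional-675813; echo "exit=$?"
  kubectl --context functional-675813 delete service invalid-svc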

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 config get cpus: exit status 14 (78.473208ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 config get cpus: exit status 14 (72.59128ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
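The exit status 14 above is the expected behaviour when a key is not set; the full cycle looks like this:

  # "config get" on an unset key exits 14; set it, read it back, then unset it again
  minikube -p functional-675813 config get cpus; echo "exit=$?"
  minikube -p functional-675813 config set cpus 2
  minikube -p functional-675813 config get cpus
  minikube -p functional-675813 config unset cpus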

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (11.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-675813 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-675813 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1431887: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.35s)
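Outside the harness the same invocation prints the proxied dashboard URL and keeps serving it until interrupted:

  # print the dashboard URL on a fixed local port instead of opening a browser (Ctrl-C to stop)
  minikube dashboard --url --port 36195 -p functional-675813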

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-675813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-675813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (241.195721ms)

                                                
                                                
-- stdout --
	* [functional-675813] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:53:40.433923 1431184 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:53:40.434142 1431184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:53:40.434155 1431184 out.go:304] Setting ErrFile to fd 2...
	I0815 00:53:40.434161 1431184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:53:40.434488 1431184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 00:53:40.434848 1431184 out.go:298] Setting JSON to false
	I0815 00:53:40.435851 1431184 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34563,"bootTime":1723648658,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0815 00:53:40.435916 1431184 start.go:139] virtualization:  
	I0815 00:53:40.439075 1431184 out.go:177] * [functional-675813] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 00:53:40.442513 1431184 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:53:40.442678 1431184 notify.go:220] Checking for updates...
	I0815 00:53:40.447815 1431184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:53:40.450594 1431184 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 00:53:40.453218 1431184 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	I0815 00:53:40.455696 1431184 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 00:53:40.458256 1431184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:53:40.461369 1431184 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:53:40.461890 1431184 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:53:40.505455 1431184 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:53:40.505572 1431184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:53:40.593159 1431184 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 00:53:40.582143276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:53:40.593275 1431184 docker.go:307] overlay module found
	I0815 00:53:40.596236 1431184 out.go:177] * Using the docker driver based on existing profile
	I0815 00:53:40.598836 1431184 start.go:297] selected driver: docker
	I0815 00:53:40.598857 1431184 start.go:901] validating driver "docker" against &{Name:functional-675813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-675813 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:53:40.598967 1431184 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:53:40.602116 1431184 out.go:177] 
	W0815 00:53:40.604763 1431184 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 00:53:40.607400 1431184 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-675813 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)
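Both dry runs can be repeated as-is; the first is expected to fail with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB minimum, and neither touches the running cluster:

  # validation only: the undersized memory request is rejected before anything is changed
  minikube start -p functional-675813 --dry-run --memory 250MB --alsologtostderr \
    --driver=docker --container-runtime=crio; echo "exit=$?"
  # the same dry run with the profile's existing settings succeeds
  minikube start -p functional-675813 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio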

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-675813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-675813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (193.939363ms)

                                                
                                                
-- stdout --
	* [functional-675813] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:53:40.228754 1431135 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:53:40.228958 1431135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:53:40.228972 1431135 out.go:304] Setting ErrFile to fd 2...
	I0815 00:53:40.228978 1431135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:53:40.229381 1431135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 00:53:40.229806 1431135 out.go:298] Setting JSON to false
	I0815 00:53:40.230834 1431135 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34563,"bootTime":1723648658,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0815 00:53:40.230911 1431135 start.go:139] virtualization:  
	I0815 00:53:40.234452 1431135 out.go:177] * [functional-675813] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0815 00:53:40.237945 1431135 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:53:40.238080 1431135 notify.go:220] Checking for updates...
	I0815 00:53:40.243236 1431135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:53:40.245902 1431135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 00:53:40.248756 1431135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	I0815 00:53:40.251513 1431135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 00:53:40.254066 1431135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:53:40.257307 1431135 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:53:40.257874 1431135 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:53:40.279552 1431135 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 00:53:40.279665 1431135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:53:40.348237 1431135 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 00:53:40.337800534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:53:40.348347 1431135 docker.go:307] overlay module found
	I0815 00:53:40.351156 1431135 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0815 00:53:40.353660 1431135 start.go:297] selected driver: docker
	I0815 00:53:40.353676 1431135 start.go:901] validating driver "docker" against &{Name:functional-675813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-675813 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:53:40.353771 1431135 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:53:40.357131 1431135 out.go:177] 
	W0815 00:53:40.359714 1431135 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 00:53:40.362346 1431135 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
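The French output above is the same dry-run failure rendered under a French locale. A sketch, assuming minikube picks the message language up from the standard LC_ALL/LANG variables (the log does not show how the harness sets the locale):

  # expect the localized RSRC_INSUFFICIENT_REQ_MEMORY message
  LC_ALL=fr_FR.UTF-8 minikube start -p functional-675813 --dry-run --memory 250MB \
    --driver=docker --container-runtime=crio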

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
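The three status forms used above, runnable as-is against the same profile:

  # default, templated, and JSON status output
  minikube -p functional-675813 status
  minikube -p functional-675813 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p functional-675813 status -o json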

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-675813 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-675813 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-qljtz" [8a696f0c-e709-4d5e-85b4-7679689e2a8b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-qljtz" [8a696f0c-e709-4d5e-85b4-7679689e2a8b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004254118s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31658
functional_test.go:1675: http://192.168.49.2:31658: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-qljtz

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31658
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)
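The deploy-expose-resolve flow behind this test, as a sketch (the image and service name are the ones used above; rollout status is just a convenient way to wait for the pod):

  # deploy the echo server, expose it on a NodePort, and resolve its URL through minikube
  kubectl --context functional-675813 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-675813 expose deployment hello-node-connect --type=NodePort --port=8080
  kubectl --context functional-675813 rollout status deployment/hello-node-connect
  URL=$(minikube -p functional-675813 service hello-node-connect --url)
  curl -s "$URL"    # responds with the Hostname / request dump shown above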

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e74f84c6-d4a4-44db-a3a9-9af9fb9bfabd] Running
E0815 00:53:09.310358 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004016251s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-675813 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-675813 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-675813 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-675813 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6496f4e0-1463-4971-b5f0-92b9f39a202e] Pending
helpers_test.go:344: "sp-pod" [6496f4e0-1463-4971-b5f0-92b9f39a202e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6496f4e0-1463-4971-b5f0-92b9f39a202e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00367301s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-675813 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-675813 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-675813 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bd97391a-e75e-4f5c-8e8e-c9bd90cdc128] Pending
helpers_test.go:344: "sp-pod" [bd97391a-e75e-4f5c-8e8e-c9bd90cdc128] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004022433s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-675813 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.80s)
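The storage-provisioner manifests under testdata are not shown in the log; the heredoc below is a minimal stand-in (generic nginx image, default storage class) that exercises the same claim, write, recreate, verify flow:

  # stand-in for testdata/storage-provisioner/{pvc,pod}.yaml
  cat > /tmp/sp-pvc-pod.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
  kubectl --context functional-675813 apply -f /tmp/sp-pvc-pod.yaml
  kubectl --context functional-675813 wait --for=condition=Ready pod/sp-pod --timeout=3m
  # write through the claim, recreate the pod, and confirm the file survived
  kubectl --context functional-675813 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-675813 delete pod sp-pod
  kubectl --context functional-675813 apply -f /tmp/sp-pvc-pod.yaml
  kubectl --context functional-675813 wait --for=condition=Ready pod/sp-pod --timeout=3m
  kubectl --context functional-675813 exec sp-pod -- ls /tmp/mount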

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh -n functional-675813 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cp functional-675813:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd236050044/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh -n functional-675813 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh -n functional-675813 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)
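The ssh and cp commands from the SSHCmd and CpCmd sections above, usable directly (the /tmp destination is only an example path):

  # run a command on the node over ssh, and copy files in and out of it
  minikube -p functional-675813 ssh "echo hello"
  minikube -p functional-675813 ssh "cat /etc/hostname"
  minikube -p functional-675813 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-675813 ssh -n functional-675813 "sudo cat /home/docker/cp-test.txt"
  minikube -p functional-675813 cp functional-675813:/home/docker/cp-test.txt /tmp/cp-test.txt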

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1404298/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo cat /etc/test/nested/copy/1404298/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1404298.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo cat /etc/ssl/certs/1404298.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1404298.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo cat /usr/share/ca-certificates/1404298.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14042982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo cat /etc/ssl/certs/14042982.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14042982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo cat /usr/share/ca-certificates/14042982.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-675813 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 ssh "sudo systemctl is-active docker": exit status 1 (382.972277ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 ssh "sudo systemctl is-active containerd": exit status 1 (454.09366ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-675813 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-675813 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-675813 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-675813 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1428708: os: process already finished
helpers_test.go:502: unable to terminate pid 1428518: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-675813 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-675813 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5de5beb0-c98c-4e90-8be8-a22bc64691ef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5de5beb0-c98c-4e90-8be8-a22bc64691ef] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004152188s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.48s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-675813 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.67.90 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-675813 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
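The tunnel sub-tests above boil down to: keep "minikube tunnel" running, create a LoadBalancer service, wait for its ingress IP, and hit it. testdata/testsvc.yaml is not shown in the log, so the nginx pod and service below are a stand-in:

  # keep a tunnel open in the background so LoadBalancer services get a reachable ingress IP
  minikube -p functional-675813 tunnel --alsologtostderr >/tmp/tunnel.log 2>&1 &
  TUNNEL_PID=$!
  # stand-in for testdata/testsvc.yaml: an nginx pod exposed as a LoadBalancer service
  kubectl --context functional-675813 run nginx-svc --image=nginx --port=80
  kubectl --context functional-675813 expose pod nginx-svc --type=LoadBalancer --port=80
  kubectl --context functional-675813 wait --for=condition=Ready pod/nginx-svc --timeout=4m
  sleep 5   # give the tunnel a moment to assign the ingress IP
  INGRESS_IP=$(kubectl --context functional-675813 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -s "http://$INGRESS_IP" >/dev/null && echo "tunnel at http://$INGRESS_IP is working"
  kill "$TUNNEL_PID"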

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-675813 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-675813 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-g5sl7" [36b83c06-da6d-41c5-8c8b-d55832702d9f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0815 00:53:29.792493 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-64b4f8f9ff-g5sl7" [36b83c06-da6d-41c5-8c8b-d55832702d9f] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003983125s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "334.095016ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "64.226435ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "346.212295ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "60.697057ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdany-port3415340863/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723683212298648944" to /tmp/TestFunctionalparallelMountCmdany-port3415340863/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723683212298648944" to /tmp/TestFunctionalparallelMountCmdany-port3415340863/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723683212298648944" to /tmp/TestFunctionalparallelMountCmdany-port3415340863/001/test-1723683212298648944
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.074063ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 00:53 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 00:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 00:53 test-1723683212298648944
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh cat /mount-9p/test-1723683212298648944
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-675813 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bf2d0015-e170-4654-aaad-d59daf09a10c] Pending
helpers_test.go:344: "busybox-mount" [bf2d0015-e170-4654-aaad-d59daf09a10c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bf2d0015-e170-4654-aaad-d59daf09a10c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bf2d0015-e170-4654-aaad-d59daf09a10c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00863116s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-675813 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdany-port3415340863/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.97s)
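The any-port variant exercises the full 9p round trip: a minikube mount helper is started on the host, the guest confirms the mount with findmnt, and a busybox pod then reads the files. Below is a minimal sketch of the same check done by hand, assuming a running profile named functional-675813 and an arbitrary host directory; the first findmnt failure in the log is just the test probing before the mount helper is ready, after which it retries and succeeds.

  # start the 9p mount in the background (host dir -> /mount-9p inside the node)
  out/minikube-linux-arm64 mount -p functional-675813 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
  MOUNT_PID=$!

  # the guest should report a 9p filesystem at the mount point (may need a retry right after start)
  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T /mount-9p | grep 9p"

  # files created on the host are visible inside the node
  echo "written on the host" > /tmp/hostdir/created-by-test
  out/minikube-linux-arm64 -p functional-675813 ssh -- ls -la /mount-9p

  # stop the mount helper when done
  kill "$MOUNT_PID"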

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 service list -o json
functional_test.go:1494: Took "519.50969ms" to run "out/minikube-linux-arm64 -p functional-675813 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
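Listing services as JSON makes the output scriptable. The sketch below pulls names and URLs out with jq; the field names (Namespace, Name, URLs) are recalled from memory rather than taken from this log, so dump the raw JSON once and adjust the filter if they differ.

  # inspect the raw shape first
  out/minikube-linux-arm64 -p functional-675813 service list -o json

  # then, assuming Namespace/Name/URLs keys, print one line per service
  out/minikube-linux-arm64 -p functional-675813 service list -o json \
    | jq -r '.[] | "\(.Namespace)/\(.Name) \((.URLs // []) | join(","))"'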

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31098
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31098
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
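The HTTPS, Format and URL subtests all resolve the same NodePort endpoint (port 31098 on the node IP). Once a service like hello-node exists, the URL can be captured and probed directly; this sketch reuses the commands above, with curl as an extra step that the test itself does not run.

  # grab the plain HTTP URL and hit it
  URL=$(out/minikube-linux-arm64 -p functional-675813 service hello-node --url)
  curl -s "$URL"

  # or extract just the node IP via the --format template
  out/minikube-linux-arm64 -p functional-675813 service hello-node --url --format='{{.IP}}'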

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdspecific-port1067042626/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (460.932273ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdspecific-port1067042626/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 ssh "sudo umount -f /mount-9p": exit status 1 (324.810595ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-675813 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdspecific-port1067042626/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.34s)
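The specific-port variant pins the host side of the 9p server to a fixed port (46464 here), which matters when only known ports are reachable. The later umount failure with status 32 is benign: the mount helper had already cleaned up, so "not mounted" is the expected answer. A sketch of the same flow, with the forced umount only needed if a mount is ever left behind:

  # serve the 9p mount on a fixed host port instead of a random one
  out/minikube-linux-arm64 mount -p functional-675813 /tmp/hostdir:/mount-9p --port 46464 --alsologtostderr -v=1 &

  # if a stale mount lingers in the guest, force it off
  out/minikube-linux-arm64 -p functional-675813 ssh "sudo umount -f /mount-9p"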

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4235096450/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4235096450/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4235096450/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T" /mount1: exit status 1 (842.340106ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-675813 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4235096450/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4235096450/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-675813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4235096450/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.61s)
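VerifyCleanup starts three mounts (/mount1, /mount2, /mount3) and then relies on a single kill switch to stop every mount helper for the profile; the "unable to find parent, assuming dead" lines confirm the daemons were already gone when the test tried to stop them individually. The cleanup command on its own:

  # terminate all outstanding minikube mount helpers for this profile
  out/minikube-linux-arm64 mount -p functional-675813 --kill=true

  # sanity check: no 9p mounts should remain inside the node
  out/minikube-linux-arm64 -p functional-675813 ssh "findmnt -t 9p || true"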

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 version -o=json --components: (1.09745892s)
--- PASS: TestFunctional/parallel/Version/components (1.10s)
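The components variant reports the versions of the components minikube bundles or manages as JSON, which took about a second here because it queries the running node. Piping through jq keeps it readable; the exact keys vary between minikube releases, so this is just a pretty-print rather than a specific filter.

  out/minikube-linux-arm64 -p functional-675813 version -o=json --components | jq .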

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-675813 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-675813
localhost/kicbase/echo-server:functional-675813
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-675813 image ls --format short --alsologtostderr:
I0815 00:53:55.652418 1433658 out.go:291] Setting OutFile to fd 1 ...
I0815 00:53:55.652798 1433658 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:55.652817 1433658 out.go:304] Setting ErrFile to fd 2...
I0815 00:53:55.652823 1433658 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:55.653238 1433658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
I0815 00:53:55.654292 1433658 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:55.654523 1433658 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:55.655776 1433658 cli_runner.go:164] Run: docker container inspect functional-675813 --format={{.State.Status}}
I0815 00:53:55.673805 1433658 ssh_runner.go:195] Run: systemctl --version
I0815 00:53:55.674453 1433658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675813
I0815 00:53:55.691396 1433658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/functional-675813/id_rsa Username:docker}
I0815 00:53:55.787477 1433658 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
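image ls talks to the runtime inside the node (the stderr shows it ultimately running sudo crictl images --output json over SSH for this crio profile) and can render the result several ways. The short format above is the grep-friendly one; the sibling tests below cover the other formats, which can also be requested directly:

  # one repo:tag per line (as above)
  out/minikube-linux-arm64 -p functional-675813 image ls --format short

  # the same inventory as a table, JSON, or YAML
  out/minikube-linux-arm64 -p functional-675813 image ls --format table
  out/minikube-linux-arm64 -p functional-675813 image ls --format json
  out/minikube-linux-arm64 -p functional-675813 image ls --format yaml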

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-675813 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | cd0f0ae0ec9e0 | 92.6MB |
| docker.io/library/nginx                 | latest             | 235ff27fe7956 | 197MB  |
| localhost/minikube-local-cache-test     | functional-675813  | e8fc005310217 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.31.0            | 71d55d66fd4ee | 95.9MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| docker.io/library/nginx                 | alpine             | d7cd33d7d4ed1 | 46.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | fcb0683e6bdbd | 86.9MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | d5e283bc63d43 | 90.3MB |
| localhost/kicbase/echo-server           | functional-675813  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | fbbbd428abb4d | 67MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-675813 image ls --format table --alsologtostderr:
I0815 00:53:56.471138 1433842 out.go:291] Setting OutFile to fd 1 ...
I0815 00:53:56.471267 1433842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:56.471273 1433842 out.go:304] Setting ErrFile to fd 2...
I0815 00:53:56.471278 1433842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:56.471557 1433842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
I0815 00:53:56.472191 1433842 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:56.472301 1433842 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:56.472784 1433842 cli_runner.go:164] Run: docker container inspect functional-675813 --format={{.State.Status}}
I0815 00:53:56.493970 1433842 ssh_runner.go:195] Run: systemctl --version
I0815 00:53:56.494031 1433842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675813
I0815 00:53:56.516059 1433842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/functional-675813/id_rsa Username:docker}
I0815 00:53:56.614914 1433842 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-675813 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a
142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"86930758"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"90290738"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.i
o/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"e8fc005310217d99c3594fdb9ab48e97f0cc6007e4ee283591a423bfda714613","repoDigests":["localhost/minikube-local-cache-test@sha256:e37fb7a2cf1c149a54457630d780e21b5d351854d8a6e0c685036c0ec1313f33"],"repoTags":["localhost/minikube-local-cache-test:functional-675813"],"size":"3330"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"95949719"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"d7cd33d7d4ed1
cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9","docker.io/library/nginx@sha256:37d07a7f2aef3a0cc9ca4aafd9331c0796e47536c06a1f7304f98d69816baed7"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671358"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-675813"],"size":"4788229"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"cd0f0ae0ec9e0cdc092
079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"92567005"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808","registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67007814"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77
206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51","repoDigests":["docker.io/library/nginx@sha256:5543c3ce08bf5c9acf64bc054c6d7c161ae39ea8617be0f18cae3ac13df746a9","docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40"],"repoTags":["docker.io/library/nginx:latest"],"size":"197104786"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-675813 image ls --format json --alsologtostderr:
I0815 00:53:56.167843 1433786 out.go:291] Setting OutFile to fd 1 ...
I0815 00:53:56.167992 1433786 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:56.168004 1433786 out.go:304] Setting ErrFile to fd 2...
I0815 00:53:56.168009 1433786 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:56.168266 1433786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
I0815 00:53:56.169114 1433786 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:56.169276 1433786 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:56.169876 1433786 cli_runner.go:164] Run: docker container inspect functional-675813 --format={{.State.Status}}
I0815 00:53:56.204280 1433786 ssh_runner.go:195] Run: systemctl --version
I0815 00:53:56.204343 1433786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675813
I0815 00:53:56.227977 1433786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/functional-675813/id_rsa Username:docker}
I0815 00:53:56.324324 1433786 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-675813 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "86930758"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "92567005"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51
repoDigests:
- docker.io/library/nginx@sha256:5543c3ce08bf5c9acf64bc054c6d7c161ae39ea8617be0f18cae3ac13df746a9
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
repoTags:
- docker.io/library/nginx:latest
size: "197104786"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: e8fc005310217d99c3594fdb9ab48e97f0cc6007e4ee283591a423bfda714613
repoDigests:
- localhost/minikube-local-cache-test@sha256:e37fb7a2cf1c149a54457630d780e21b5d351854d8a6e0c685036c0ec1313f33
repoTags:
- localhost/minikube-local-cache-test:functional-675813
size: "3330"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-675813
size: "4788229"
- id: d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "90290738"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
- docker.io/library/nginx@sha256:37d07a7f2aef3a0cc9ca4aafd9331c0796e47536c06a1f7304f98d69816baed7
repoTags:
- docker.io/library/nginx:alpine
size: "46671358"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "95949719"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
- registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67007814"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-675813 image ls --format yaml --alsologtostderr:
I0815 00:53:55.909969 1433692 out.go:291] Setting OutFile to fd 1 ...
I0815 00:53:55.910202 1433692 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:55.910238 1433692 out.go:304] Setting ErrFile to fd 2...
I0815 00:53:55.910259 1433692 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:55.910616 1433692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
I0815 00:53:55.911387 1433692 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:55.911583 1433692 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:55.912208 1433692 cli_runner.go:164] Run: docker container inspect functional-675813 --format={{.State.Status}}
I0815 00:53:55.933741 1433692 ssh_runner.go:195] Run: systemctl --version
I0815 00:53:55.933879 1433692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675813
I0815 00:53:55.956931 1433692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/functional-675813/id_rsa Username:docker}
I0815 00:53:56.051692 1433692 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-675813 ssh pgrep buildkitd: exit status 1 (340.303998ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image build -t localhost/my-image:functional-675813 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 image build -t localhost/my-image:functional-675813 testdata/build --alsologtostderr: (2.375091356s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-675813 image build -t localhost/my-image:functional-675813 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8a7ec04c930
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-675813
--> 0025ca563c6
Successfully tagged localhost/my-image:functional-675813
0025ca563c6fadea240c71b4454f08015e5139c41b4a8ffe47af1d145962225a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-675813 image build -t localhost/my-image:functional-675813 testdata/build --alsologtostderr:
I0815 00:53:56.262919 1433800 out.go:291] Setting OutFile to fd 1 ...
I0815 00:53:56.264194 1433800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:56.264211 1433800 out.go:304] Setting ErrFile to fd 2...
I0815 00:53:56.264217 1433800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:53:56.264481 1433800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
I0815 00:53:56.265598 1433800 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:56.266439 1433800 config.go:182] Loaded profile config "functional-675813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:53:56.267106 1433800 cli_runner.go:164] Run: docker container inspect functional-675813 --format={{.State.Status}}
I0815 00:53:56.285322 1433800 ssh_runner.go:195] Run: systemctl --version
I0815 00:53:56.285380 1433800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675813
I0815 00:53:56.303779 1433800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/functional-675813/id_rsa Username:docker}
I0815 00:53:56.403918 1433800 build_images.go:161] Building image from path: /tmp/build.3181521933.tar
I0815 00:53:56.404001 1433800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 00:53:56.414930 1433800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3181521933.tar
I0815 00:53:56.418585 1433800 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3181521933.tar: stat -c "%s %y" /var/lib/minikube/build/build.3181521933.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3181521933.tar': No such file or directory
I0815 00:53:56.418639 1433800 ssh_runner.go:362] scp /tmp/build.3181521933.tar --> /var/lib/minikube/build/build.3181521933.tar (3072 bytes)
I0815 00:53:56.444175 1433800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3181521933
I0815 00:53:56.453161 1433800 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3181521933 -xf /var/lib/minikube/build/build.3181521933.tar
I0815 00:53:56.467314 1433800 crio.go:315] Building image: /var/lib/minikube/build/build.3181521933
I0815 00:53:56.467393 1433800 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-675813 /var/lib/minikube/build/build.3181521933 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0815 00:53:58.537230 1433800 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-675813 /var/lib/minikube/build/build.3181521933 --cgroup-manager=cgroupfs: (2.069806824s)
I0815 00:53:58.537304 1433800 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3181521933
I0815 00:53:58.547888 1433800 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3181521933.tar
I0815 00:53:58.557487 1433800 build_images.go:217] Built localhost/my-image:functional-675813 from /tmp/build.3181521933.tar
I0815 00:53:58.557516 1433800 build_images.go:133] succeeded building to: functional-675813
I0815 00:53:58.557521 1433800 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)
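ImageBuild first checks that buildkitd is not running (the pgrep failure appears to be the expected branch on crio, where the build is delegated to podman inside the node) and then builds testdata/build. The STEP lines imply a three-line Containerfile: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. Below is a stand-in build context reconstructed from those steps for running the same command outside the suite; the real testdata may differ in detail.

  # approximate testdata/build, inferred from the STEP output above
  mkdir -p /tmp/build-demo
  printf 'hello\n' > /tmp/build-demo/content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-demo/Dockerfile

  # build inside the node and confirm the tag shows up
  out/minikube-linux-arm64 -p functional-675813 image build -t localhost/my-image:functional-675813 /tmp/build-demo --alsologtostderr
  out/minikube-linux-arm64 -p functional-675813 image ls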

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-675813
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image load --daemon kicbase/echo-server:functional-675813 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 image load --daemon kicbase/echo-server:functional-675813 --alsologtostderr: (1.454450722s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-arm64 -p functional-675813 image ls: (2.067986337s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.52s)
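Setup tagged a host-side Docker image as kicbase/echo-server:functional-675813, and the load tests copy it from the host Docker daemon into the profile's crio runtime with image load --daemon. The same round trip by hand:

  # tag an image on the host, then push it into the cluster's runtime
  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-675813
  out/minikube-linux-arm64 -p functional-675813 image load --daemon kicbase/echo-server:functional-675813

  # it should now appear in the node's image list
  out/minikube-linux-arm64 -p functional-675813 image ls | grep echo-server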

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image load --daemon kicbase/echo-server:functional-675813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-675813
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image load --daemon kicbase/echo-server:functional-675813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls
2024/08/15 00:53:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image save kicbase/echo-server:functional-675813 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
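All three UpdateContextCmd variants run the same command: update-context rewrites the kubeconfig entry for the profile (server address and port) if it has drifted, and leaves it alone otherwise. A quick check of what it points at, assuming kubectl is on the PATH and that the kubeconfig cluster is named after the profile; the jsonpath query is illustrative.

  out/minikube-linux-arm64 -p functional-675813 update-context --alsologtostderr -v=2
  kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-675813")].cluster.server}'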

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image rm kicbase/echo-server:functional-675813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)
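SaveToFile and LoadFromFile cover the tar-based path: export an image from the node to a tarball on the host, and import a tarball back into the node. Combined with image rm this makes a simple round trip; the paths here are arbitrary.

  # export from the cluster to a host-side tarball
  out/minikube-linux-arm64 -p functional-675813 image save kicbase/echo-server:functional-675813 /tmp/echo-server-save.tar

  # remove it from the node, then restore it from the tarball
  out/minikube-linux-arm64 -p functional-675813 image rm kicbase/echo-server:functional-675813
  out/minikube-linux-arm64 -p functional-675813 image load /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-675813 image ls | grep echo-server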

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-675813
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-675813 image save --daemon kicbase/echo-server:functional-675813 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-675813
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.70s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-675813
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-675813
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-675813
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (179.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-095774 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0815 00:54:10.753915 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:55:32.675288 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-095774 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m58.560337089s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (179.51s)
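StartCluster brings up a multi-control-plane profile with the --ha flag and then confirms every node reports healthy. Stripped of the test-only verbosity flags, the equivalent invocation is below; the cert_rotation errors interleaved above appear to refer to the earlier addons-177998 profile and are unrelated to this cluster.

  # start an HA (multi-control-plane) cluster on the docker driver with crio
  out/minikube-linux-arm64 start -p ha-095774 --wait=true --memory=2200 --ha --driver=docker --container-runtime=crio

  # one status block per node, including each control plane's apiserver
  out/minikube-linux-arm64 -p ha-095774 status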

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-095774 -- rollout status deployment/busybox: (4.335845395s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-jhcdf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kktjf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kv62j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-jhcdf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kktjf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kv62j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-jhcdf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kktjf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kv62j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.34s)
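DeployApp applies testdata/ha/ha-pod-dns-test.yaml (a busybox deployment, judging by the pod names), waits for the rollout, and then resolves kubernetes.io, kubernetes.default and the full cluster-local name from every replica. With the deployment already applied, the per-pod DNS check can be scripted as below; the loop is just shorthand for the repeated exec calls above.

  # wait for the test deployment, then resolve the cluster-local API name from each pod
  out/minikube-linux-arm64 kubectl -p ha-095774 -- rollout status deployment/busybox
  for pod in $(out/minikube-linux-arm64 kubectl -p ha-095774 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
    out/minikube-linux-arm64 kubectl -p ha-095774 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done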

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-jhcdf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-jhcdf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kktjf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kktjf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kv62j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-kv62j -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)
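The host-reachability check can likewise be repeated manually; a sketch using the same pod and the gateway address observed in this run (192.168.49.1):
    # resolve the host alias from inside a pod, then ping the gateway address
    out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-jhcdf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 kubectl -p ha-095774 -- exec busybox-7dff88458-jhcdf -- sh -c "ping -c 1 192.168.49.1"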

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (38.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-095774 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-095774 -v=7 --alsologtostderr: (37.013577141s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr: (1.014242001s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (38.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-095774 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0815 00:57:48.814545 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp testdata/cp-test.txt ha-095774:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2559114583/001/cp-test_ha-095774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774:/home/docker/cp-test.txt ha-095774-m02:/home/docker/cp-test_ha-095774_ha-095774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test_ha-095774_ha-095774-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774:/home/docker/cp-test.txt ha-095774-m03:/home/docker/cp-test_ha-095774_ha-095774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m03 "sudo cat /home/docker/cp-test_ha-095774_ha-095774-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774:/home/docker/cp-test.txt ha-095774-m04:/home/docker/cp-test_ha-095774_ha-095774-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m04 "sudo cat /home/docker/cp-test_ha-095774_ha-095774-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp testdata/cp-test.txt ha-095774-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2559114583/001/cp-test_ha-095774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m02:/home/docker/cp-test.txt ha-095774:/home/docker/cp-test_ha-095774-m02_ha-095774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test_ha-095774-m02_ha-095774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m02:/home/docker/cp-test.txt ha-095774-m03:/home/docker/cp-test_ha-095774-m02_ha-095774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m03 "sudo cat /home/docker/cp-test_ha-095774-m02_ha-095774-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m02:/home/docker/cp-test.txt ha-095774-m04:/home/docker/cp-test_ha-095774-m02_ha-095774-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m04 "sudo cat /home/docker/cp-test_ha-095774-m02_ha-095774-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp testdata/cp-test.txt ha-095774-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2559114583/001/cp-test_ha-095774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m03:/home/docker/cp-test.txt ha-095774:/home/docker/cp-test_ha-095774-m03_ha-095774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test_ha-095774-m03_ha-095774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m03:/home/docker/cp-test.txt ha-095774-m02:/home/docker/cp-test_ha-095774-m03_ha-095774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test_ha-095774-m03_ha-095774-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m03:/home/docker/cp-test.txt ha-095774-m04:/home/docker/cp-test_ha-095774-m03_ha-095774-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m04 "sudo cat /home/docker/cp-test_ha-095774-m03_ha-095774-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp testdata/cp-test.txt ha-095774-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2559114583/001/cp-test_ha-095774-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m04:/home/docker/cp-test.txt ha-095774:/home/docker/cp-test_ha-095774-m04_ha-095774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test_ha-095774-m04_ha-095774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m04:/home/docker/cp-test.txt ha-095774-m02:/home/docker/cp-test_ha-095774-m04_ha-095774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m04 "sudo cat /home/docker/cp-test.txt"
E0815 00:58:06.723639 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:06.730112 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:06.741940 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:06.763306 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test_ha-095774-m04_ha-095774-m02.txt"
E0815 00:58:06.804947 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:06.886301 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:07.047606 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m04:/home/docker/cp-test.txt ha-095774-m03:/home/docker/cp-test_ha-095774-m04_ha-095774-m03.txt
E0815 00:58:07.369425 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m03 "sudo cat /home/docker/cp-test_ha-095774-m04_ha-095774-m03.txt"
E0815 00:58:08.014255 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.07s)
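The copy test repeats one pattern across every node pair: copy a file with minikube cp, then read it back over minikube ssh to confirm the contents. A minimal sketch of a single round trip, using node names from this run:
    # push a local file to a node, then read it back to confirm the copy
    out/minikube-linux-arm64 -p ha-095774 cp testdata/cp-test.txt ha-095774-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774-m02 "sudo cat /home/docker/cp-test.txt"
    # copy the same file from one node to another and verify on the target
    out/minikube-linux-arm64 -p ha-095774 cp ha-095774-m02:/home/docker/cp-test.txt ha-095774:/home/docker/cp-test_ha-095774-m02_ha-095774.txt
    out/minikube-linux-arm64 -p ha-095774 ssh -n ha-095774 "sudo cat /home/docker/cp-test_ha-095774-m02_ha-095774.txt"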

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 node stop m02 -v=7 --alsologtostderr
E0815 00:58:09.295633 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:11.857087 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:16.518167 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:16.978966 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-095774 node stop m02 -v=7 --alsologtostderr: (12.057601275s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr: exit status 7 (701.779791ms)

                                                
                                                
-- stdout --
	ha-095774
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-095774-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-095774-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-095774-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:58:20.279026 1449652 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:58:20.279161 1449652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:58:20.279167 1449652 out.go:304] Setting ErrFile to fd 2...
	I0815 00:58:20.279172 1449652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:58:20.279541 1449652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 00:58:20.279838 1449652 out.go:298] Setting JSON to false
	I0815 00:58:20.279861 1449652 mustload.go:65] Loading cluster: ha-095774
	I0815 00:58:20.281328 1449652 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:58:20.281380 1449652 status.go:255] checking status of ha-095774 ...
	I0815 00:58:20.281383 1449652 notify.go:220] Checking for updates...
	I0815 00:58:20.281982 1449652 cli_runner.go:164] Run: docker container inspect ha-095774 --format={{.State.Status}}
	I0815 00:58:20.299579 1449652 status.go:330] ha-095774 host status = "Running" (err=<nil>)
	I0815 00:58:20.299608 1449652 host.go:66] Checking if "ha-095774" exists ...
	I0815 00:58:20.299925 1449652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774
	I0815 00:58:20.318462 1449652 host.go:66] Checking if "ha-095774" exists ...
	I0815 00:58:20.318781 1449652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:58:20.318835 1449652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774
	I0815 00:58:20.337927 1449652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34615 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774/id_rsa Username:docker}
	I0815 00:58:20.445351 1449652 ssh_runner.go:195] Run: systemctl --version
	I0815 00:58:20.450686 1449652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:58:20.463841 1449652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 00:58:20.519265 1449652 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-15 00:58:20.509365179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 00:58:20.519845 1449652 kubeconfig.go:125] found "ha-095774" server: "https://192.168.49.254:8443"
	I0815 00:58:20.519884 1449652 api_server.go:166] Checking apiserver status ...
	I0815 00:58:20.519926 1449652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:58:20.531201 1449652 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup
	I0815 00:58:20.540910 1449652 api_server.go:182] apiserver freezer: "7:freezer:/docker/19e21c0763342b7a8fc977bf33ca962c388b91d0111aff9f6d8bdbc4cc7ffde6/crio/crio-4da357b680fef55073bb909a8383687f62ef450d9ca81fe69d54e7364c6b7100"
	I0815 00:58:20.540984 1449652 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/19e21c0763342b7a8fc977bf33ca962c388b91d0111aff9f6d8bdbc4cc7ffde6/crio/crio-4da357b680fef55073bb909a8383687f62ef450d9ca81fe69d54e7364c6b7100/freezer.state
	I0815 00:58:20.550698 1449652 api_server.go:204] freezer state: "THAWED"
	I0815 00:58:20.550727 1449652 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 00:58:20.558428 1449652 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 00:58:20.558457 1449652 status.go:422] ha-095774 apiserver status = Running (err=<nil>)
	I0815 00:58:20.558470 1449652 status.go:257] ha-095774 status: &{Name:ha-095774 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:58:20.558487 1449652 status.go:255] checking status of ha-095774-m02 ...
	I0815 00:58:20.558801 1449652 cli_runner.go:164] Run: docker container inspect ha-095774-m02 --format={{.State.Status}}
	I0815 00:58:20.575304 1449652 status.go:330] ha-095774-m02 host status = "Stopped" (err=<nil>)
	I0815 00:58:20.575337 1449652 status.go:343] host is not running, skipping remaining checks
	I0815 00:58:20.575345 1449652 status.go:257] ha-095774-m02 status: &{Name:ha-095774-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:58:20.575366 1449652 status.go:255] checking status of ha-095774-m03 ...
	I0815 00:58:20.575693 1449652 cli_runner.go:164] Run: docker container inspect ha-095774-m03 --format={{.State.Status}}
	I0815 00:58:20.592839 1449652 status.go:330] ha-095774-m03 host status = "Running" (err=<nil>)
	I0815 00:58:20.592861 1449652 host.go:66] Checking if "ha-095774-m03" exists ...
	I0815 00:58:20.593324 1449652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774-m03
	I0815 00:58:20.610054 1449652 host.go:66] Checking if "ha-095774-m03" exists ...
	I0815 00:58:20.610378 1449652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:58:20.610446 1449652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m03
	I0815 00:58:20.627733 1449652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34625 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m03/id_rsa Username:docker}
	I0815 00:58:20.719528 1449652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:58:20.731620 1449652 kubeconfig.go:125] found "ha-095774" server: "https://192.168.49.254:8443"
	I0815 00:58:20.731650 1449652 api_server.go:166] Checking apiserver status ...
	I0815 00:58:20.731691 1449652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:58:20.743030 1449652 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1299/cgroup
	I0815 00:58:20.753431 1449652 api_server.go:182] apiserver freezer: "7:freezer:/docker/769ac9dfcff6b6da325bce14e9f32f6d900fda854b36b7d4e504a07be31bb174/crio/crio-085544b3e7256ca1e7b29aab948b47b8e43f18b9d1dfda2f17e559019c2c1e0d"
	I0815 00:58:20.753514 1449652 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/769ac9dfcff6b6da325bce14e9f32f6d900fda854b36b7d4e504a07be31bb174/crio/crio-085544b3e7256ca1e7b29aab948b47b8e43f18b9d1dfda2f17e559019c2c1e0d/freezer.state
	I0815 00:58:20.762457 1449652 api_server.go:204] freezer state: "THAWED"
	I0815 00:58:20.762541 1449652 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 00:58:20.770451 1449652 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 00:58:20.770480 1449652 status.go:422] ha-095774-m03 apiserver status = Running (err=<nil>)
	I0815 00:58:20.770491 1449652 status.go:257] ha-095774-m03 status: &{Name:ha-095774-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:58:20.770508 1449652 status.go:255] checking status of ha-095774-m04 ...
	I0815 00:58:20.770817 1449652 cli_runner.go:164] Run: docker container inspect ha-095774-m04 --format={{.State.Status}}
	I0815 00:58:20.787015 1449652 status.go:330] ha-095774-m04 host status = "Running" (err=<nil>)
	I0815 00:58:20.787042 1449652 host.go:66] Checking if "ha-095774-m04" exists ...
	I0815 00:58:20.787343 1449652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-095774-m04
	I0815 00:58:20.803776 1449652 host.go:66] Checking if "ha-095774-m04" exists ...
	I0815 00:58:20.804083 1449652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:58:20.804137 1449652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-095774-m04
	I0815 00:58:20.819988 1449652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34630 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/ha-095774-m04/id_rsa Username:docker}
	I0815 00:58:20.911587 1449652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:58:20.925649 1449652 status.go:257] ha-095774-m04 status: &{Name:ha-095774-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.76s)
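The degraded-cluster scenario above comes down to the two commands below; note that status deliberately exits non-zero while part of the cluster is down (exit status 7 in this run), which the harness treats as expected output rather than a command failure:
    # stop the m02 control-plane node, then query overall cluster status
    out/minikube-linux-arm64 -p ha-095774 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr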

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (20.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 node start m02 -v=7 --alsologtostderr
E0815 00:58:27.220752 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-095774 node start m02 -v=7 --alsologtostderr: (19.301229783s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr: (1.499970323s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0815 00:58:47.702865 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (16.583847861s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (281.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-095774 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-095774 -v=7 --alsologtostderr
E0815 00:59:28.665390 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-095774 -v=7 --alsologtostderr: (37.183517735s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-095774 --wait=true -v=7 --alsologtostderr
E0815 01:00:50.587448 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:02:48.814500 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:03:06.724421 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:03:34.428734 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-095774 --wait=true -v=7 --alsologtostderr: (4m4.29116428s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-095774
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (281.62s)
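As a sketch, the restart scenario reduces to three commands, with node list run before and after so the node set can be compared across the stop/start cycle:
    # record the node list, stop every node, restart with --wait, then compare
    out/minikube-linux-arm64 node list -p ha-095774 -v=7 --alsologtostderr
    out/minikube-linux-arm64 stop -p ha-095774 -v=7 --alsologtostderr
    out/minikube-linux-arm64 start -p ha-095774 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-arm64 node list -p ha-095774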

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-095774 node delete m03 -v=7 --alsologtostderr: (11.639629407s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.67s)
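Deleting a secondary control-plane node and confirming the survivors stay Ready uses the commands below; the go-template from the log simply prints each node's Ready condition (re-quoted here for an interactive shell):
    # remove the m03 control-plane node and check the remaining nodes
    out/minikube-linux-arm64 -p ha-095774 node delete m03 -v=7 --alsologtostderr
    kubectl get nodes
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'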

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-095774 stop -v=7 --alsologtostderr: (35.752273354s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr: exit status 7 (104.993657ms)

                                                
                                                
-- stdout --
	ha-095774
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-095774-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-095774-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:04:29.665005 1464650 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:04:29.665131 1464650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:04:29.665141 1464650 out.go:304] Setting ErrFile to fd 2...
	I0815 01:04:29.665147 1464650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:04:29.665390 1464650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 01:04:29.665568 1464650 out.go:298] Setting JSON to false
	I0815 01:04:29.665604 1464650 mustload.go:65] Loading cluster: ha-095774
	I0815 01:04:29.665688 1464650 notify.go:220] Checking for updates...
	I0815 01:04:29.666042 1464650 config.go:182] Loaded profile config "ha-095774": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:04:29.666090 1464650 status.go:255] checking status of ha-095774 ...
	I0815 01:04:29.666669 1464650 cli_runner.go:164] Run: docker container inspect ha-095774 --format={{.State.Status}}
	I0815 01:04:29.684927 1464650 status.go:330] ha-095774 host status = "Stopped" (err=<nil>)
	I0815 01:04:29.684950 1464650 status.go:343] host is not running, skipping remaining checks
	I0815 01:04:29.684957 1464650 status.go:257] ha-095774 status: &{Name:ha-095774 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 01:04:29.684990 1464650 status.go:255] checking status of ha-095774-m02 ...
	I0815 01:04:29.685311 1464650 cli_runner.go:164] Run: docker container inspect ha-095774-m02 --format={{.State.Status}}
	I0815 01:04:29.707935 1464650 status.go:330] ha-095774-m02 host status = "Stopped" (err=<nil>)
	I0815 01:04:29.707961 1464650 status.go:343] host is not running, skipping remaining checks
	I0815 01:04:29.707969 1464650 status.go:257] ha-095774-m02 status: &{Name:ha-095774-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 01:04:29.707988 1464650 status.go:255] checking status of ha-095774-m04 ...
	I0815 01:04:29.708289 1464650 cli_runner.go:164] Run: docker container inspect ha-095774-m04 --format={{.State.Status}}
	I0815 01:04:29.725545 1464650 status.go:330] ha-095774-m04 host status = "Stopped" (err=<nil>)
	I0815 01:04:29.725571 1464650 status.go:343] host is not running, skipping remaining checks
	I0815 01:04:29.725579 1464650 status.go:257] ha-095774-m04 status: &{Name:ha-095774-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (74.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-095774 --control-plane -v=7 --alsologtostderr
E0815 01:07:48.814943 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-095774 --control-plane -v=7 --alsologtostderr: (1m13.213305249s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.15s)
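Adding another control-plane node after the full restart is the same node add call used earlier for the worker, plus the --control-plane flag, followed by a status check:
    # add a new control-plane node and re-check status across all nodes
    out/minikube-linux-arm64 node add -p ha-095774 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-095774 status -v=7 --alsologtostderr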

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

                                                
                                    
x
+
TestJSONOutput/start/Command (48.5s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-882896 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-882896 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (48.498053776s)
--- PASS: TestJSONOutput/start/Command (48.50s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-882896 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-882896 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-882896 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-882896 --output=json --user=testUser: (5.865737876s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-253397 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-253397 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.473123ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"75bc5129-7f37-453c-9e4c-75cfd85e7613","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-253397] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a14cc3e9-da7c-4061-82a5-bf4eade1063b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19443"}}
	{"specversion":"1.0","id":"e2369a0d-4952-487f-8f2b-78b5269f7d27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b22f3cb-52ee-4461-9851-1d0a3df076c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig"}}
	{"specversion":"1.0","id":"4943d87a-8353-42f3-9378-ca0c46612061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube"}}
	{"specversion":"1.0","id":"61e3ca2c-9925-45d5-bd03-49804ddf4997","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"040b8c43-ce49-4d59-b509-7da087953618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"682d4011-4438-4996-ad03-64cf8f44b5fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-253397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-253397
--- PASS: TestErrorJSONOutput (0.21s)
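Each line emitted with --output=json is a CloudEvents-style object carrying a type field, as the stdout above shows. As a sketch only, the error event from a run like this could be isolated with jq; jq is not used by the test itself and is assumed to be installed:
    # expect exit status 56 from the unsupported driver; keep only the error event
    out/minikube-linux-arm64 start -p json-output-error-253397 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error")'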

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (41.01s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-186473 --network=
E0815 01:09:11.879528 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-186473 --network=: (38.961847494s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-186473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-186473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-186473: (2.014841535s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.01s)
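Both KIC network variants follow the same shape: start with a --network value and then look at what Docker actually created. A sketch of the empty-value case from this run (with an empty --network= minikube picks or creates a network itself, and the follow-up listing shows the result):
    # let minikube choose the network, then list Docker networks to inspect it
    out/minikube-linux-arm64 start -p docker-network-186473 --network=
    docker network ls --format {{.Name}}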

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.14s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-773096 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-773096 --network=bridge: (32.120982554s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-773096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-773096
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-773096: (1.997150603s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.14s)

                                                
                                    
x
+
TestKicExistingNetwork (34.5s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-847287 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-847287 --network=existing-network: (32.362083548s)
helpers_test.go:175: Cleaning up "existing-network-847287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-847287
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-847287: (1.986951509s)
--- PASS: TestKicExistingNetwork (34.50s)

                                                
                                    
x
+
TestKicCustomSubnet (34.03s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-690969 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-690969 --subnet=192.168.60.0/24: (31.877988662s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-690969 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-690969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-690969
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-690969: (2.127724155s)
--- PASS: TestKicCustomSubnet (34.03s)
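The custom-subnet check is just the inspect command from the log, which should print back the subnet requested at start time:
    # create a cluster on a specific subnet, then read it back from Docker
    out/minikube-linux-arm64 start -p custom-subnet-690969 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-690969 --format "{{(index .IPAM.Config 0).Subnet}}"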

                                                
                                    
x
+
TestKicStaticIP (35.03s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-938646 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-938646 --static-ip=192.168.200.200: (32.728722198s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-938646 ip
helpers_test.go:175: Cleaning up "static-ip-938646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-938646
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-938646: (2.141144534s)
--- PASS: TestKicStaticIP (35.03s)
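Similarly, the static-IP check compares the requested address against what minikube ip reports after startup:
    # request a fixed IP for the node container and confirm it was applied
    out/minikube-linux-arm64 start -p static-ip-938646 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-938646 ip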

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (70.57s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-953751 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-953751 --driver=docker  --container-runtime=crio: (30.610953672s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-956869 --driver=docker  --container-runtime=crio
E0815 01:12:48.814419 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:13:06.724354 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-956869 --driver=docker  --container-runtime=crio: (34.285393891s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-953751
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-956869
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-956869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-956869
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-956869: (2.104937504s)
helpers_test.go:175: Cleaning up "first-953751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-953751
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-953751: (2.308727432s)
--- PASS: TestMinikubeProfile (70.57s)
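
The profile-switching sequence above, in manual form (profile names here are hypothetical):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first          # make "first" the active profile
    minikube profile list -ojson    # inspect all profiles as JSON
    minikube delete -p second
    minikube delete -p first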

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-170419 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-170419 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.63737003s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.64s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-170419 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
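
The two steps above amount to starting a Kubernetes-free machine with a host mount and listing the mount point from inside the guest; a minimal sketch with a hypothetical profile name mount-demo:

    # driver-only machine (no Kubernetes) with a 9p host mount on a fixed port
    minikube start -p mount-demo --memory=2048 --no-kubernetes \
      --mount --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464
    # the host directory should be visible at /minikube-host inside the guest
    minikube -p mount-demo ssh -- ls /minikube-host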

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-183883 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-183883 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.473722192s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.47s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-183883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-170419 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-170419 --alsologtostderr -v=5: (1.622630161s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-183883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-183883
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-183883: (1.204593646s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.11s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-183883
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-183883: (7.107601683s)
--- PASS: TestMountStart/serial/RestartStopped (8.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-183883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (77.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-718719 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0815 01:14:29.791143 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-718719 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.228331414s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.74s)
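
The same two-node bring-up can be done directly; a sketch with a hypothetical profile name multinode-demo:

    minikube start -p multinode-demo --nodes=2 --memory=2200 --wait=true \
      --driver=docker --container-runtime=crio
    # both the control plane and the worker node should report Running
    minikube -p multinode-demo status --alsologtostderr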

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-718719 -- rollout status deployment/busybox: (2.812706949s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-cj7fp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-q8zv6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-cj7fp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-q8zv6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-cj7fp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-q8zv6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.80s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-cj7fp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-cj7fp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-q8zv6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-718719 -- exec busybox-7dff88458-q8zv6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
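
The pipeline used inside each pod first resolves host.minikube.internal, then pings the resolved gateway; a sketch, with <busybox-pod> standing in for one of the deployment's pods:

    # in this environment line 5 of BusyBox nslookup output carries the resolved address; cut extracts it
    kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # 192.168.67.1 was the address resolved in this run
    kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"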

                                                
                                    
x
+
TestMultiNode/serial/AddNode (31.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-718719 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-718719 -v 3 --alsologtostderr: (31.050241941s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.72s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-718719 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp testdata/cp-test.txt multinode-718719:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2562491979/001/cp-test_multinode-718719.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719:/home/docker/cp-test.txt multinode-718719-m02:/home/docker/cp-test_multinode-718719_multinode-718719-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m02 "sudo cat /home/docker/cp-test_multinode-718719_multinode-718719-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719:/home/docker/cp-test.txt multinode-718719-m03:/home/docker/cp-test_multinode-718719_multinode-718719-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m03 "sudo cat /home/docker/cp-test_multinode-718719_multinode-718719-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp testdata/cp-test.txt multinode-718719-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2562491979/001/cp-test_multinode-718719-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719-m02:/home/docker/cp-test.txt multinode-718719:/home/docker/cp-test_multinode-718719-m02_multinode-718719.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719 "sudo cat /home/docker/cp-test_multinode-718719-m02_multinode-718719.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719-m02:/home/docker/cp-test.txt multinode-718719-m03:/home/docker/cp-test_multinode-718719-m02_multinode-718719-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m03 "sudo cat /home/docker/cp-test_multinode-718719-m02_multinode-718719-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp testdata/cp-test.txt multinode-718719-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2562491979/001/cp-test_multinode-718719-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719-m03:/home/docker/cp-test.txt multinode-718719:/home/docker/cp-test_multinode-718719-m03_multinode-718719.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719 "sudo cat /home/docker/cp-test_multinode-718719-m03_multinode-718719.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 cp multinode-718719-m03:/home/docker/cp-test.txt multinode-718719-m02:/home/docker/cp-test_multinode-718719-m03_multinode-718719-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 ssh -n multinode-718719-m02 "sudo cat /home/docker/cp-test_multinode-718719-m03_multinode-718719-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.07s)
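
The copy matrix above exercises minikube cp in three directions; a condensed sketch with hypothetical profile and node names:

    # host -> node
    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node, then read the file back over ssh on the target node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"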

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-718719 node stop m03: (1.210026672s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-718719 status: exit status 7 (534.454715ms)

                                                
                                                
-- stdout --
	multinode-718719
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-718719-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-718719-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-718719 status --alsologtostderr: exit status 7 (496.291701ms)

                                                
                                                
-- stdout --
	multinode-718719
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-718719-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-718719-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:15:54.527718 1519075 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:15:54.527931 1519075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:15:54.527956 1519075 out.go:304] Setting ErrFile to fd 2...
	I0815 01:15:54.527975 1519075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:15:54.528532 1519075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 01:15:54.528790 1519075 out.go:298] Setting JSON to false
	I0815 01:15:54.528848 1519075 mustload.go:65] Loading cluster: multinode-718719
	I0815 01:15:54.528938 1519075 notify.go:220] Checking for updates...
	I0815 01:15:54.529356 1519075 config.go:182] Loaded profile config "multinode-718719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:15:54.529390 1519075 status.go:255] checking status of multinode-718719 ...
	I0815 01:15:54.530231 1519075 cli_runner.go:164] Run: docker container inspect multinode-718719 --format={{.State.Status}}
	I0815 01:15:54.547710 1519075 status.go:330] multinode-718719 host status = "Running" (err=<nil>)
	I0815 01:15:54.547734 1519075 host.go:66] Checking if "multinode-718719" exists ...
	I0815 01:15:54.548045 1519075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-718719
	I0815 01:15:54.564664 1519075 host.go:66] Checking if "multinode-718719" exists ...
	I0815 01:15:54.565048 1519075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:15:54.565104 1519075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-718719
	I0815 01:15:54.581918 1519075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34735 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/multinode-718719/id_rsa Username:docker}
	I0815 01:15:54.679738 1519075 ssh_runner.go:195] Run: systemctl --version
	I0815 01:15:54.683961 1519075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:15:54.695380 1519075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:15:54.748793 1519075 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-15 01:15:54.73893445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:15:54.749487 1519075 kubeconfig.go:125] found "multinode-718719" server: "https://192.168.67.2:8443"
	I0815 01:15:54.749529 1519075 api_server.go:166] Checking apiserver status ...
	I0815 01:15:54.749575 1519075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:15:54.760996 1519075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	I0815 01:15:54.771175 1519075 api_server.go:182] apiserver freezer: "7:freezer:/docker/f9f224ed59f6a6fbcda387df5ee228ff814223686d59a380fe9583d7779ef01a/crio/crio-962af62832b8ebb587e21a50f87c37a54691a78ae6d502dd54b5c67608564ff3"
	I0815 01:15:54.771243 1519075 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f9f224ed59f6a6fbcda387df5ee228ff814223686d59a380fe9583d7779ef01a/crio/crio-962af62832b8ebb587e21a50f87c37a54691a78ae6d502dd54b5c67608564ff3/freezer.state
	I0815 01:15:54.780296 1519075 api_server.go:204] freezer state: "THAWED"
	I0815 01:15:54.780325 1519075 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0815 01:15:54.789454 1519075 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0815 01:15:54.789482 1519075 status.go:422] multinode-718719 apiserver status = Running (err=<nil>)
	I0815 01:15:54.789492 1519075 status.go:257] multinode-718719 status: &{Name:multinode-718719 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 01:15:54.789509 1519075 status.go:255] checking status of multinode-718719-m02 ...
	I0815 01:15:54.789842 1519075 cli_runner.go:164] Run: docker container inspect multinode-718719-m02 --format={{.State.Status}}
	I0815 01:15:54.805786 1519075 status.go:330] multinode-718719-m02 host status = "Running" (err=<nil>)
	I0815 01:15:54.805809 1519075 host.go:66] Checking if "multinode-718719-m02" exists ...
	I0815 01:15:54.806313 1519075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-718719-m02
	I0815 01:15:54.825904 1519075 host.go:66] Checking if "multinode-718719-m02" exists ...
	I0815 01:15:54.826283 1519075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 01:15:54.826341 1519075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-718719-m02
	I0815 01:15:54.842105 1519075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34740 SSHKeyPath:/home/jenkins/minikube-integration/19443-1398913/.minikube/machines/multinode-718719-m02/id_rsa Username:docker}
	I0815 01:15:54.935962 1519075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:15:54.948180 1519075 status.go:257] multinode-718719-m02 status: &{Name:multinode-718719-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 01:15:54.948216 1519075 status.go:255] checking status of multinode-718719-m03 ...
	I0815 01:15:54.948512 1519075 cli_runner.go:164] Run: docker container inspect multinode-718719-m03 --format={{.State.Status}}
	I0815 01:15:54.966112 1519075 status.go:330] multinode-718719-m03 host status = "Stopped" (err=<nil>)
	I0815 01:15:54.966136 1519075 status.go:343] host is not running, skipping remaining checks
	I0815 01:15:54.966144 1519075 status.go:257] multinode-718719-m03 status: &{Name:multinode-718719-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-718719 node start m03 -v=7 --alsologtostderr: (9.737575034s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.53s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (80.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-718719
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-718719
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-718719: (24.817404131s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-718719 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-718719 --wait=true -v=8 --alsologtostderr: (55.482068021s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-718719
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.42s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-718719 node delete m03: (4.642372549s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 stop
E0815 01:17:48.814466 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-718719 stop: (23.718595872s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-718719 status: exit status 7 (98.941219ms)

                                                
                                                
-- stdout --
	multinode-718719
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-718719-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-718719 status --alsologtostderr: exit status 7 (90.278381ms)

                                                
                                                
-- stdout --
	multinode-718719
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-718719-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:17:55.091086 1526539 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:17:55.091272 1526539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:17:55.091284 1526539 out.go:304] Setting ErrFile to fd 2...
	I0815 01:17:55.091290 1526539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:17:55.091586 1526539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 01:17:55.091793 1526539 out.go:298] Setting JSON to false
	I0815 01:17:55.091830 1526539 mustload.go:65] Loading cluster: multinode-718719
	I0815 01:17:55.091971 1526539 notify.go:220] Checking for updates...
	I0815 01:17:55.092347 1526539 config.go:182] Loaded profile config "multinode-718719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:17:55.092360 1526539 status.go:255] checking status of multinode-718719 ...
	I0815 01:17:55.093256 1526539 cli_runner.go:164] Run: docker container inspect multinode-718719 --format={{.State.Status}}
	I0815 01:17:55.110544 1526539 status.go:330] multinode-718719 host status = "Stopped" (err=<nil>)
	I0815 01:17:55.110568 1526539 status.go:343] host is not running, skipping remaining checks
	I0815 01:17:55.110576 1526539 status.go:257] multinode-718719 status: &{Name:multinode-718719 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 01:17:55.110614 1526539 status.go:255] checking status of multinode-718719-m02 ...
	I0815 01:17:55.110966 1526539 cli_runner.go:164] Run: docker container inspect multinode-718719-m02 --format={{.State.Status}}
	I0815 01:17:55.129551 1526539 status.go:330] multinode-718719-m02 host status = "Stopped" (err=<nil>)
	I0815 01:17:55.129575 1526539 status.go:343] host is not running, skipping remaining checks
	I0815 01:17:55.129604 1526539 status.go:257] multinode-718719-m02 status: &{Name:multinode-718719-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.91s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (56.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-718719 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0815 01:18:06.724301 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-718719 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (56.199875004s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-718719 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.87s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (33.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-718719
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-718719-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-718719-m02 --driver=docker  --container-runtime=crio: exit status 14 (82.859ms)

                                                
                                                
-- stdout --
	* [multinode-718719-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-718719-m02' is duplicated with machine name 'multinode-718719-m02' in profile 'multinode-718719'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-718719-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-718719-m03 --driver=docker  --container-runtime=crio: (31.226013006s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-718719
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-718719: exit status 80 (331.446642ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-718719 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-718719-m03 already exists in multinode-718719-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-718719-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-718719-m03: (1.958311084s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.66s)

                                                
                                    
x
+
TestPreload (130.32s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-080332 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-080332 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.426079424s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-080332 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-080332 image pull gcr.io/k8s-minikube/busybox: (2.058193597s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-080332
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-080332: (5.79975145s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-080332 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-080332 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (24.221384733s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-080332 image list
helpers_test.go:175: Cleaning up "test-preload-080332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-080332
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-080332: (2.491603906s)
--- PASS: TestPreload (130.32s)
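
The preload round trip above, in manual form (profile name hypothetical):

    # build the cluster without the preloaded image tarball, on an older Kubernetes
    minikube start -p preload-demo --memory=2200 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    # pull an extra image so the restart has something to carry over
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    # restart and confirm the pulled image is still present
    minikube start -p preload-demo --memory=2200 --driver=docker --container-runtime=crio
    minikube -p preload-demo image list
    minikube delete -p preload-demo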

                                                
                                    
x
+
TestScheduledStopUnix (107s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-269756 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-269756 --memory=2048 --driver=docker  --container-runtime=crio: (30.697985338s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-269756 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-269756 -n scheduled-stop-269756
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-269756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-269756 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-269756 -n scheduled-stop-269756
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-269756
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-269756 --schedule 15s
E0815 01:22:48.814228 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0815 01:23:06.724389 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-269756
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-269756: exit status 7 (72.367109ms)

                                                
                                                
-- stdout --
	scheduled-stop-269756
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-269756 -n scheduled-stop-269756
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-269756 -n scheduled-stop-269756: exit status 7 (67.729296ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-269756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-269756
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-269756: (4.745929296s)
--- PASS: TestScheduledStopUnix (107.00s)
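
The schedule/cancel/expire sequence above can be reproduced as follows (profile name hypothetical):

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube status -p sched-demo --format={{.TimeToStop}}
    minikube stop -p sched-demo --cancel-scheduled   # disarm it
    minikube stop -p sched-demo --schedule 15s       # arm a short one and let it fire
    # once the timer has fired, the host reports Stopped (status exits 7, which is expected)
    minikube status -p sched-demo --format={{.Host}}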

                                                
                                    
x
+
TestInsufficientStorage (10.59s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-446799 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-446799 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.130235723s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1bb617a1-8c56-407f-a7ce-61be81155d3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-446799] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9f2cf4d-a773-497e-ba9a-520e0188d155","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19443"}}
	{"specversion":"1.0","id":"690d74d2-4b9f-43d2-9f1e-3242aa62a802","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66bc0934-2a95-4224-8843-5af8ab905560","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig"}}
	{"specversion":"1.0","id":"21334ff3-4f2e-43ed-ad32-d0dcc2399be7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube"}}
	{"specversion":"1.0","id":"efc6c21f-94ee-42f8-93f1-81c7e4b1f1f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"30582308-c45a-4dda-bd36-41043b1bd13f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9245346e-1b1b-4739-9347-dc03e73010cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1fee3b0e-ffd3-4072-a0c9-69654af2f739","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8ee2400e-d987-4e3d-b003-e7865049f11b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f8b6815-61f1-4647-886b-4329dd1f7c78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e4fda39b-42ad-4116-bf78-f516af43d280","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-446799\" primary control-plane node in \"insufficient-storage-446799\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d64e1eec-0976-46a0-9309-3618d53dcdb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723650208-19443 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f93cc56-a519-435f-a0fc-f65c251d71cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb937992-d3b9-487d-9117-f84b1a742fd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-446799 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-446799 --output=json --layout=cluster: exit status 7 (279.047764ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-446799","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-446799","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:23:35.399548 1544252 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-446799" does not appear in /home/jenkins/minikube-integration/19443-1398913/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-446799 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-446799 --output=json --layout=cluster: exit status 7 (280.196588ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-446799","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-446799","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:23:35.682483 1544317 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-446799" does not appear in /home/jenkins/minikube-integration/19443-1398913/kubeconfig
	E0815 01:23:35.692861 1544317 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/insufficient-storage-446799/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-446799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-446799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-446799: (1.901755479s)
--- PASS: TestInsufficientStorage (10.59s)
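
With --output=json, start emits each step and the final RSRC_DOCKER_STORAGE error as CloudEvents, and --layout=cluster gives a structured status; a sketch, where the two MINIKUBE_TEST_* variables are the test-only knobs visible in the events above that simulate a nearly full /var (profile name hypothetical):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --memory=2048 --output=json --wait=true \
      --driver=docker --container-runtime=crio
    # structured status; in this run it reports StatusCode 507 / InsufficientStorage
    minikube status -p storage-demo --output=json --layout=cluster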

                                                
                                    
x
+
TestRunningBinaryUpgrade (74.15s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.311430945 start -p running-upgrade-405420 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.311430945 start -p running-upgrade-405420 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.609031291s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-405420 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-405420 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.300338179s)
helpers_test.go:175: Cleaning up "running-upgrade-405420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-405420
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-405420: (2.811159612s)
--- PASS: TestRunningBinaryUpgrade (74.15s)

                                                
                                    
x
+
TestKubernetesUpgrade (382.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-349593 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-349593 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.53217108s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-349593
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-349593: (1.318079629s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-349593 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-349593 status --format={{.Host}}: exit status 7 (86.784779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-349593 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-349593 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m35.929532679s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-349593 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-349593 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-349593 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (107.122767ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-349593] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-349593
	    minikube start -p kubernetes-upgrade-349593 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3495932 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-349593 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-349593 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-349593 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.552657651s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-349593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-349593
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-349593: (2.3467656s)
--- PASS: TestKubernetesUpgrade (382.04s)
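The upgrade/downgrade sequence above can be reproduced with plain minikube commands. A minimal sketch, assuming the docker driver and an illustrative profile name; the downgrade attempt is expected to be refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), exactly as in the log:

    # bring up the old version, stop it, then start again with the newer version to upgrade in place
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=crio
    # going back down is refused; minikube suggests delete/recreate instead
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio || echo "downgrade refused, exit $?"
    minikube delete -p k8s-upgrade-demo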

                                                
                                    
x
+
TestMissingContainerUpgrade (172.63s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3734287343 start -p missing-upgrade-609335 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3734287343 start -p missing-upgrade-609335 --memory=2200 --driver=docker  --container-runtime=crio: (1m32.345033438s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-609335
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-609335: (10.413051508s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-609335
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-609335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0815 01:25:51.881600 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-609335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.173112778s)
helpers_test.go:175: Cleaning up "missing-upgrade-609335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-609335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-609335: (2.106905357s)
--- PASS: TestMissingContainerUpgrade (172.63s)
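Here the cluster's container is stopped and removed behind minikube's back, and the newer binary must recreate it on the next start. A minimal sketch of the same scenario (old binary path and profile name are illustrative):

    /tmp/minikube-old start -p missing-upgrade-demo --memory=2200 --driver=docker --container-runtime=crio
    # simulate the "missing" container by removing it out from under minikube
    docker stop missing-upgrade-demo
    docker rm missing-upgrade-demo
    # the current binary should notice the missing container and recreate it
    minikube start -p missing-upgrade-demo --memory=2200 --driver=docker --container-runtime=crio
    minikube delete -p missing-upgrade-demo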

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-272058 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-272058 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (73.255628ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-272058] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
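The usage check is easy to exercise by hand: --no-kubernetes and --kubernetes-version are mutually exclusive, and the suggested fix is to clear any globally configured version. A minimal sketch (illustrative profile name); the first command should fail fast with exit status 14:

    # rejected: the two flags cannot be combined (MK_USAGE, exit status 14)
    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
    # clear any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio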

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (39.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-272058 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-272058 --driver=docker  --container-runtime=crio: (39.335418319s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-272058 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (10.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-272058 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-272058 --no-kubernetes --driver=docker  --container-runtime=crio: (8.053222722s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-272058 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-272058 status -o json: exit status 2 (432.554452ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-272058","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-272058
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-272058: (2.071555288s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.56s)
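The JSON above is what the harness inspects: the host container keeps running while the kubelet and API server stay stopped. A minimal sketch of the same check, assuming jq is available (it is not part of the tooling shown in the log) and reusing the illustrative profile name from the previous sketch:

    # note: minikube status itself exits non-zero (status 2) while components are stopped
    minikube -p nok8s-demo status -o json | jq -e '.Host == "Running" and .Kubelet == "Stopped" and .APIServer == "Stopped"' \
      && echo "Kubernetes components are stopped as expected"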

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-272058 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-272058 --no-kubernetes --driver=docker  --container-runtime=crio: (6.884672434s)
--- PASS: TestNoKubernetes/serial/Start (6.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-272058 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-272058 "sudo systemctl is-active --quiet service kubelet": exit status 1 (340.346917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
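The kubelet probe relies on systemctl exit codes: is-active returns 0 when the unit is running and a non-zero code otherwise (3 for inactive, as seen in the stderr above), which minikube ssh surfaces as exit status 1. A minimal sketch of the same probe (illustrative profile name):

    if minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet kubelet"; then
      echo "kubelet is active (unexpected for a --no-kubernetes profile)"
    else
      echo "kubelet is not running, as expected"
    fi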

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-272058
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-272058: (1.268954937s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-272058 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-272058 --driver=docker  --container-runtime=crio: (7.810024287s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.81s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-272058 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-272058 "sudo systemctl is-active --quiet service kubelet": exit status 1 (369.268535ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (99.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.572779379 start -p stopped-upgrade-324488 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.572779379 start -p stopped-upgrade-324488 --memory=2200 --vm-driver=docker  --container-runtime=crio: (47.200513398s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.572779379 -p stopped-upgrade-324488 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.572779379 -p stopped-upgrade-324488 stop: (2.276159252s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-324488 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0815 01:27:48.814651 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:28:06.724399 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-324488 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.457915368s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.94s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-324488
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-324488: (1.319116513s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

                                                
                                    
x
+
TestPause/serial/Start (52.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-601620 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-601620 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (52.084149792s)
--- PASS: TestPause/serial/Start (52.09s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (26.52s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-601620 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-601620 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.513033191s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.52s)

                                                
                                    
x
+
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-601620 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-601620 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-601620 --output=json --layout=cluster: exit status 2 (341.494767ms)

                                                
                                                
-- stdout --
	{"Name":"pause-601620","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-601620","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
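The --layout=cluster output encodes component state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), and the command exits non-zero while anything is paused. A minimal sketch of checking the paused state, again assuming jq and an illustrative profile name:

    # capture the JSON even though the command exits with status 2 while paused
    out=$(minikube status -p pause-demo --output=json --layout=cluster || true)
    echo "$out" | jq -e '.StatusName == "Paused"' && echo "cluster is paused"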

                                                
                                    
x
+
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-601620 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-601620 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.84s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-601620 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-601620 --alsologtostderr -v=5: (2.838106041s)
--- PASS: TestPause/serial/DeletePaused (2.84s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-601620
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-601620: exit status 1 (20.044257ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-601620: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)
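After delete, the harness confirms that nothing is left behind; the failing docker volume inspect above is the expected result. A minimal sketch of the same verification (illustrative profile name):

    minikube delete -p pause-demo --alsologtostderr -v=5
    # all three lookups should come back empty once the profile is gone
    docker ps -a --filter name=pause-demo
    docker volume inspect pause-demo 2>/dev/null || echo "volume already removed"
    docker network ls --filter name=pause-demo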

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-973436 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-973436 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (254.483251ms)

                                                
                                                
-- stdout --
	* [false-973436] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:31:13.838695 1583793 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:31:13.838926 1583793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:31:13.838955 1583793 out.go:304] Setting ErrFile to fd 2...
	I0815 01:31:13.838975 1583793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:31:13.839320 1583793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-1398913/.minikube/bin
	I0815 01:31:13.839851 1583793 out.go:298] Setting JSON to false
	I0815 01:31:13.840886 1583793 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36816,"bootTime":1723648658,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0815 01:31:13.841004 1583793 start.go:139] virtualization:  
	I0815 01:31:13.843718 1583793 out.go:177] * [false-973436] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 01:31:13.845430 1583793 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:31:13.845501 1583793 notify.go:220] Checking for updates...
	I0815 01:31:13.848144 1583793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:31:13.849690 1583793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-1398913/kubeconfig
	I0815 01:31:13.851194 1583793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-1398913/.minikube
	I0815 01:31:13.852761 1583793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 01:31:13.854337 1583793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:31:13.856512 1583793 config.go:182] Loaded profile config "force-systemd-flag-224943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:31:13.856613 1583793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:31:13.912479 1583793 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 01:31:13.912610 1583793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 01:31:14.012161 1583793 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 01:31:13.995600999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214908928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 01:31:14.012297 1583793 docker.go:307] overlay module found
	I0815 01:31:14.014523 1583793 out.go:177] * Using the docker driver based on user configuration
	I0815 01:31:14.016686 1583793 start.go:297] selected driver: docker
	I0815 01:31:14.016716 1583793 start.go:901] validating driver "docker" against <nil>
	I0815 01:31:14.016750 1583793 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:31:14.019314 1583793 out.go:177] 
	W0815 01:31:14.021284 1583793 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0815 01:31:14.022989 1583793 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-973436 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-973436" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-973436

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973436"

                                                
                                                
----------------------- debugLogs end: false-973436 [took: 4.080367689s] --------------------------------
helpers_test.go:175: Cleaning up "false-973436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-973436
--- PASS: TestNetworkPlugins/group/false (4.54s)
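This group only verifies that --cni=false is rejected for the crio runtime, since CRI-O depends on a CNI plugin for pod networking; the long debugLogs dump above is therefore expected to show a missing profile. A minimal sketch of the check (illustrative profile name), which should fail fast with exit status 14 and the MK_USAGE message from the log:

    minikube start -p false-demo --cni=false --driver=docker --container-runtime=crio \
      || echo "rejected as expected: the crio runtime requires CNI (exit $?)"
    minikube delete -p false-demo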

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (149.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-643844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0815 01:32:48.814515 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:33:06.723608 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-643844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m29.588996184s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-643844 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b2b7aa53-9000-410f-951d-d804635be675] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b2b7aa53-9000-410f-951d-d804635be675] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005072115s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-643844 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.67s)
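DeployApp applies a busybox manifest, waits for the pod to become Ready, and runs a trivial exec to prove the container runtime works. A rough equivalent with stock kubectl, assuming a context named old-k8s-demo and a manifest equivalent to testdata/busybox.yaml:

    kubectl --context old-k8s-demo create -f testdata/busybox.yaml
    kubectl --context old-k8s-demo wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
    kubectl --context old-k8s-demo exec busybox -- /bin/sh -c "ulimit -n"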

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-643844 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-643844 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)
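The addon is enabled with image and registry overrides, which is how the suite points metrics-server at a fake registry instead of pulling the real image. A minimal sketch (illustrative profile and context names):

    minikube addons enable metrics-server -p old-k8s-demo \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-demo describe deploy/metrics-server -n kube-system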

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-643844 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-643844 --alsologtostderr -v=3: (12.11773597s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-643844 -n old-k8s-version-643844
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-643844 -n old-k8s-version-643844: exit status 7 (68.9426ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-643844 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
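minikube status exits with code 7 when the host is stopped, which the harness treats as acceptable before enabling the dashboard addon on the stopped profile. A minimal sketch (illustrative profile name):

    minikube status -p old-k8s-demo --format='{{.Host}}'; echo "status exit: $?"
    minikube addons enable dashboard -p old-k8s-demo --images=MetricsScraper=registry.k8s.io/echoserver:1.4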

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (152.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-643844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-643844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m31.783323368s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-643844 -n old-k8s-version-643844
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (152.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (69.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-302253 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-302253 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m9.237995111s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-302253 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [266932d1-4c4a-4a5d-9aaf-d057b4305cf1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [266932d1-4c4a-4a5d-9aaf-d057b4305cf1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004349444s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-302253 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)
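The DeployApp step creates the busybox pod from the manifest shipped with the test suite, waits for it to become Ready, and then checks the open-file limit inside the container. A hedged sketch of the same sequence (the kubectl wait call is an assumption for illustration; the test itself polls in Go):

    kubectl --context no-preload-302253 create -f testdata/busybox.yaml
    kubectl --context no-preload-302253 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
    kubectl --context no-preload-302253 exec busybox -- /bin/sh -c "ulimit -n"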

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-302253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-302253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.077438366s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-302253 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)
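The addon is enabled with both its image and registry overridden; fake.domain is presumably a deliberately bogus registry, and this step only verifies that the override is visible in the Deployment description. Sketch, with the grep added for illustration:

    out/minikube-linux-arm64 addons enable metrics-server -p no-preload-302253 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-302253 -n kube-system describe deploy/metrics-server | grep -i image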

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-302253 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-302253 --alsologtostderr -v=3: (11.985437895s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302253 -n no-preload-302253
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302253 -n no-preload-302253: exit status 7 (78.365154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-302253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
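With the profile stopped, minikube status exits non-zero (7 here), which the test explicitly tolerates; the point of this step is that addons can still be toggled while the cluster is down. A hedged sketch of the same pair of calls:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-302253 || echo "status exit: $?"   # "Stopped", non-zero exit expected
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-302253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4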

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (267.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-302253 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 01:37:48.814509 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-302253 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m26.901124347s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302253 -n no-preload-302253
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mxsvg" [0406ba37-616a-44e0-b55c-1ea5c5a0d12d] Running
E0815 01:38:06.724614 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005234221s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mxsvg" [0406ba37-616a-44e0-b55c-1ea5c5a0d12d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004367444s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-643844 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-643844 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-643844 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-643844 -n old-k8s-version-643844
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-643844 -n old-k8s-version-643844: exit status 2 (325.873921ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-643844 -n old-k8s-version-643844
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-643844 -n old-k8s-version-643844: exit status 2 (345.564578ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-643844 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-643844 -n old-k8s-version-643844
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-643844 -n old-k8s-version-643844
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)
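After pause, the API server reports Paused and the kubelet Stopped, each with exit status 2, which the test again treats as acceptable; unpause restores both. A sketch of the same cycle:

    out/minikube-linux-arm64 pause -p old-k8s-version-643844
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-643844 || true   # "Paused", exit 2
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p old-k8s-version-643844 || true     # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p old-k8s-version-643844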

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (49.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-681757 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-681757 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (49.155540052s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.16s)
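--embed-certs writes the client certificate and key into the kubeconfig entry as base64 data instead of file paths. A hedged way to confirm, assuming kubectl is on the PATH and honors --context together with --minify:

    kubectl config view --raw --minify --context embed-certs-681757 | grep client-certificate-data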

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-681757 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [677eca06-3d1a-47a4-ae2c-635958bec722] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [677eca06-3d1a-47a4-ae2c-635958bec722] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003684196s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-681757 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-681757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-681757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025805916s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-681757 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-681757 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-681757 --alsologtostderr -v=3: (11.97227701s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-681757 -n embed-certs-681757
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-681757 -n embed-certs-681757: exit status 7 (73.925553ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-681757 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (301.44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-681757 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 01:40:08.147623 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:08.154076 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:08.165573 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:08.187028 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:08.228462 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:08.309991 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:08.471608 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:08.793325 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:09.435431 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:10.717357 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:13.279971 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:18.401326 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:28.643610 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:40:49.125123 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:41:30.086955 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-681757 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (5m1.099498652s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-681757 -n embed-certs-681757
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (301.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-whc5c" [7e9cbc53-185b-493c-80f6-b4108e400d0c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005035766s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-whc5c" [7e9cbc53-185b-493c-80f6-b4108e400d0c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004108265s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-302253 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-302253 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
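VerifyKubernetesImages lists the images present in the CRI-O runtime and logs anything outside the expected Kubernetes set; the kindnet and busybox entries above come from the CNI and the earlier DeployApp step and are only reported, not treated as failures. The listing itself is just:

    out/minikube-linux-arm64 -p no-preload-302253 image list --format=json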

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.41s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-302253 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-302253 --alsologtostderr -v=1: (1.187456784s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302253 -n no-preload-302253
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302253 -n no-preload-302253: exit status 2 (328.273923ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302253 -n no-preload-302253
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302253 -n no-preload-302253: exit status 2 (321.447431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-302253 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302253 -n no-preload-302253
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302253 -n no-preload-302253
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-735015 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 01:42:31.883089 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:42:48.814544 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:42:52.008511 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-735015 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (51.495955944s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.50s)
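--apiserver-port=8444 moves the API server off the default 8443, so the kubeconfig entry for this profile should point at that port. A hedged check:

    kubectl --context default-k8s-diff-port-735015 cluster-info   # control plane URL should end in :8444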

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-735015 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b4d2e642-838a-47bf-b525-9cbb2bca3b14] Pending
helpers_test.go:344: "busybox" [b4d2e642-838a-47bf-b525-9cbb2bca3b14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b4d2e642-838a-47bf-b525-9cbb2bca3b14] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00289223s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-735015 exec busybox -- /bin/sh -c "ulimit -n"
E0815 01:43:06.723650 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-735015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-735015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032329947s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-735015 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-735015 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-735015 --alsologtostderr -v=3: (12.026768375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015: exit status 7 (80.381976ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-735015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-735015 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-735015 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (5m3.277544904s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qjgv7" [5fc9b200-e68d-4b3e-97ec-0f3a65609809] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003995295s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qjgv7" [5fc9b200-e68d-4b3e-97ec-0f3a65609809] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004623943s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-681757 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-681757 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-681757 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-681757 -n embed-certs-681757
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-681757 -n embed-certs-681757: exit status 2 (456.195466ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-681757 -n embed-certs-681757
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-681757 -n embed-certs-681757: exit status 2 (338.033997ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-681757 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-681757 -n embed-certs-681757
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-681757 -n embed-certs-681757
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-175406 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 01:45:08.147572 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-175406 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (35.995986164s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.00s)
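This profile starts with --network-plugin=cni and pushes a custom pod CIDR into kubeadm via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, so the node's allocated podCIDR should fall inside that range. A hedged check:

    kubectl --context newest-cni-175406 get nodes -o jsonpath='{.items[0].spec.podCIDR}'   # expect a subnet of 10.42.0.0/16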

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-175406 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-175406 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.319330325s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-175406 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-175406 --alsologtostderr -v=3: (1.261969137s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-175406 -n newest-cni-175406
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-175406 -n newest-cni-175406: exit status 7 (70.961077ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-175406 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-175406 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 01:45:35.850869 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-175406 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (15.348798297s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-175406 -n newest-cni-175406
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-175406 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.16s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-175406 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-175406 -n newest-cni-175406
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-175406 -n newest-cni-175406: exit status 2 (305.793ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-175406 -n newest-cni-175406
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-175406 -n newest-cni-175406: exit status 2 (319.38805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-175406 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-175406 -n newest-cni-175406
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-175406 -n newest-cni-175406
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.52s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (52.519480556s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.52s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-973436 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-973436 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xnkhq" [7be19ec6-8daf-42cc-9821-636fb001b0ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xnkhq" [7be19ec6-8daf-42cc-9821-636fb001b0ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003347928s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-973436 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
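The DNS, Localhost and HairPin checks all run inside the netcat deployment created by NetCatPod: cluster DNS is exercised with nslookup, the local listener with nc against localhost, and hairpin traffic by having the pod dial its own Service name. The three probes, as they appear in the log:

    kubectl --context auto-973436 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"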

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (52.58s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0815 01:47:18.684181 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/no-preload-302253/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:47:39.166182 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/no-preload-302253/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:47:48.814456 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:47:49.798450 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:48:06.724330 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (52.579278558s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.58s)
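--cni=kindnet makes minikube deploy the kindnet DaemonSet, and the ControllerPod check that follows simply waits for its pod to be Running in kube-system; a rough manual equivalent would be:

    kubectl --context kindnet-973436 -n kube-system get pods -l app=kindnet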

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w68fd" [a9e2dad9-47de-4fb7-8672-6fdfa80debfc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008589099s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-973436 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-973436 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-znm5r" [661b6569-03af-44d3-97e3-e7549d54c140] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 01:48:20.127915 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/no-preload-302253/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-znm5r" [661b6569-03af-44d3-97e3-e7549d54c140] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004472704s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5p7jw" [f11f1da4-3c42-48d5-a809-ac859818de84] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003277734s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-973436 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5p7jw" [f11f1da4-3c42-48d5-a809-ac859818de84] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003734594s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-735015 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-735015 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-735015 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-735015 --alsologtostderr -v=1: (1.158403254s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015: exit status 2 (400.963174ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015: exit status 2 (439.653438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-735015 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-735015 --alsologtostderr -v=1: (1.089195843s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-735015 -n default-k8s-diff-port-735015
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.31s)
E0815 01:52:48.814276 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/addons-177998/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:57.586288 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:57.592844 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:57.604345 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:57.625764 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:57.667319 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:57.748731 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:57.910325 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:58.232580 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:58.874666 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:00.158638 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:02.720570 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:05.558540 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:06.723927 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:07.842893 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:09.010933 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:09.017506 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:09.029034 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:09.050522 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:09.091959 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:09.173986 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:09.335550 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:09.657306 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:10.299378 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:11.580977 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:14.143080 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:18.085145 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:19.264983 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:29.506349 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/kindnet-973436/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (73.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m13.980808542s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0815 01:49:42.049950 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/no-preload-302253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.525745313s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-973436 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-973436 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d9bt7" [68155c8b-fcc1-4435-a3be-edf1f563d93b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d9bt7" [68155c8b-fcc1-4435-a3be-edf1f563d93b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004029359s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k6k27" [2aed71a6-0bb4-4897-927a-71cbe81a0414] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007842987s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-973436 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-973436 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-973436 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fglc7" [8990a9d0-9d20-458d-bc80-d098db57bce6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 01:50:08.147284 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/old-k8s-version-643844/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fglc7" [8990a9d0-9d20-458d-bc80-d098db57bce6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004802173s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-973436 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (78.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.175579264s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (61.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.949883071s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-j79c7" [d48f63b7-f624-4620-946f-7dea27938bf4] Running
E0815 01:51:43.619847 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:43.626278 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:43.638032 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:43.659412 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:43.700840 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:43.782968 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004433687s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-973436 "pgrep -a kubelet"
E0815 01:51:43.944893 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-973436 replace --force -f testdata/netcat-deployment.yaml
E0815 01:51:44.266428 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w7wfm" [90b9cf52-ab1f-4632-ae9b-094e052bcfd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 01:51:44.907906 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:46.189739 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:48.751997 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-w7wfm" [90b9cf52-ab1f-4632-ae9b-094e052bcfd0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003469983s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-973436 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-973436 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pnbd9" [91ed4a26-b166-4dd3-8e5f-da20e62ffbf4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 01:51:53.874082 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/auto-973436/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-pnbd9" [91ed4a26-b166-4dd3-8e5f-da20e62ffbf4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003758733s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-973436 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-973436 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (72.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-973436 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m12.838386565s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-973436 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-973436 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zngbw" [a4f5bc31-3cd4-4a29-8669-6bc1cb0ff3be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zngbw" [a4f5bc31-3cd4-4a29-8669-6bc1cb0ff3be] Running
E0815 01:53:38.566461 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/default-k8s-diff-port-735015/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003474292s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-973436 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-973436 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (30/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-283129 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-283129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-283129
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-624057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-624057
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E0815 01:31:09.794595 1404298 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-1398913/.minikube/profiles/functional-675813/client.crt: no such file or directory" logger="UnhandledError"
panic.go:626: 
----------------------- debugLogs start: kubenet-973436 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-973436" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-973436

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973436"

                                                
                                                
----------------------- debugLogs end: kubenet-973436 [took: 4.168344932s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-973436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-973436
--- SKIP: TestNetworkPlugins/group/kubenet (4.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-973436 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-973436" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-973436

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-973436" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973436"

                                                
                                                
----------------------- debugLogs end: cilium-973436 [took: 4.669381656s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-973436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-973436
--- SKIP: TestNetworkPlugins/group/cilium (4.96s)

                                                
                                    