Test Report: Docker_Linux_crio_arm64 18350

b07500d1f25ef3b9b4cf5a8c10c74b3642cd60ca:2024-03-11:33512

Failed tests (2/335):

|-------|---------------------------------------------|--------------|
| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress                 | 169.67       |
| 182   | TestMutliControlPlane/serial/RestartCluster | 127.32       |
|-------|---------------------------------------------|--------------|
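To reproduce a failure like these locally, minikube's integration suite runs through go test; a minimal sketch, using the TEST_ARGS form from minikube's testing docs (flag names can differ between versions, and this run additionally passed --container-runtime=crio as a start arg):

    # Hypothetical local re-run of the failed ingress test against a
    # fresh build; adjust start args to match the run being reproduced.
    env TEST_ARGS="-minikube-start-args=--driver=docker -test.run TestAddons/parallel/Ingress" make integration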
TestAddons/parallel/Ingress (169.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-127043 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-127043 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-127043 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3d6d9643-0f5c-4f89-a13a-59d2c4580cad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3d6d9643-0f5c-4f89-a13a-59d2c4580cad] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003700642s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-127043 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.339990818s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
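Exit status 28 here is curl's CURLE_OPERATION_TIMEDOUT: the probe inside the node hung until curl gave up rather than being refused outright, which usually points at the ingress controller not answering on port 80. A hand-run version of the same probe, as a debugging sketch reusing this run's profile name (not part of the test itself):

    # Repeat the test's probe with verbose output and an explicit timeout.
    out/minikube-linux-arm64 -p addons-127043 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Check whether the controller pod and service look healthy.
    kubectl --context addons-127043 -n ingress-nginx get pods,svc -o wide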
addons_test.go:286: (dbg) Run:  kubectl --context addons-127043 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.068800708s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
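The nslookup timeout means no DNS answer at all came back from the ingress-dns server expected on the node IP (192.168.49.2). Two hand checks in the same spirit, as a sketch (grep is used instead of a label selector because the addon pod's labels can vary by version):

    # Query the ingress-dns server directly with a short, explicit timeout;
    # REFUSED/NXDOMAIN and a timeout point at different failures.
    dig +time=5 +tries=1 hello-john.test @192.168.49.2
    # Verify the addon pod is actually running in kube-system.
    kubectl --context addons-127043 -n kube-system get pods -o wide | grep -i ingress-dns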
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-127043 addons disable ingress-dns --alsologtostderr -v=1: (1.359785259s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-127043 addons disable ingress --alsologtostderr -v=1: (7.745524079s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-127043
helpers_test.go:235: (dbg) docker inspect addons-127043:
-- stdout --
	[
	    {
	        "Id": "4ffca9c2f1f1484c3a5fb060c82ae7ba5bfd67fbcc26a3905702354d7468f079",
	        "Created": "2024-03-11T12:54:18.851028833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1131176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-11T12:54:19.176112751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/4ffca9c2f1f1484c3a5fb060c82ae7ba5bfd67fbcc26a3905702354d7468f079/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ffca9c2f1f1484c3a5fb060c82ae7ba5bfd67fbcc26a3905702354d7468f079/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ffca9c2f1f1484c3a5fb060c82ae7ba5bfd67fbcc26a3905702354d7468f079/hosts",
	        "LogPath": "/var/lib/docker/containers/4ffca9c2f1f1484c3a5fb060c82ae7ba5bfd67fbcc26a3905702354d7468f079/4ffca9c2f1f1484c3a5fb060c82ae7ba5bfd67fbcc26a3905702354d7468f079-json.log",
	        "Name": "/addons-127043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-127043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-127043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9b14f85fed2cbeda4498e4527bdacd232d1a5f3ff1cfe9c343e1534134efa30d-init/diff:/var/lib/docker/overlay2/4693be53430773dee06d553d71389b6111264113687a037a5053dad5bf06b450/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b14f85fed2cbeda4498e4527bdacd232d1a5f3ff1cfe9c343e1534134efa30d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b14f85fed2cbeda4498e4527bdacd232d1a5f3ff1cfe9c343e1534134efa30d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b14f85fed2cbeda4498e4527bdacd232d1a5f3ff1cfe9c343e1534134efa30d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-127043",
	                "Source": "/var/lib/docker/volumes/addons-127043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-127043",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-127043",
	                "name.minikube.sigs.k8s.io": "addons-127043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bc6db061500a8b5018f9476272660c80c7af6c500d7fb73f9a0d45968ce7784",
	            "SandboxKey": "/var/run/docker/netns/0bc6db061500",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33932"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33931"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33928"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33930"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33929"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-127043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4ffca9c2f1f1",
	                        "addons-127043"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "2cc39bc369f56937bb9c669ebf8660106e958854393c290de5bfe0541df18486",
	                    "EndpointID": "a0b9974c3a1b32d3680bb1fb1035a3792fec4b9668f1af45d086b1bc9926bada",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-127043",
	                        "4ffca9c2f1f1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
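Single fields can be pulled out of this inspect output with Go templates instead of reading the whole JSON document; the harness itself does this for the SSH port (see the cli_runner lines further down). Two equivalent one-liners for this container:

    # Published host ports for the addons-127043 container, as JSON.
    docker inspect -f '{{json .NetworkSettings.Ports}}' addons-127043
    # The container's IP on the addons-127043 network (192.168.49.2 above).
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-127043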
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-127043 -n addons-127043
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-127043 logs -n 25: (1.884075541s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-846348                                                                     | download-only-846348   | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| delete  | -p download-only-545020                                                                     | download-only-545020   | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| delete  | -p download-only-842375                                                                     | download-only-842375   | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| start   | --download-only -p                                                                          | download-docker-083833 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | download-docker-083833                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-083833                                                                   | download-docker-083833 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-587576   | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | binary-mirror-587576                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40125                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-587576                                                                     | binary-mirror-587576   | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| addons  | disable dashboard -p                                                                        | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | addons-127043                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | addons-127043                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-127043 --wait=true                                                                | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-127043 ip                                                                            | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:56 UTC | 11 Mar 24 12:56 UTC |
	| addons  | addons-127043 addons disable                                                                | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:56 UTC | 11 Mar 24 12:56 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:57 UTC | 11 Mar 24 12:57 UTC |
	|         | -p addons-127043                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-127043 ssh cat                                                                       | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:57 UTC | 11 Mar 24 12:57 UTC |
	|         | /opt/local-path-provisioner/pvc-99a0ddad-c9ee-45a7-ab15-bb7b3e522400_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-127043 addons disable                                                                | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:57 UTC | 11 Mar 24 12:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-127043 addons                                                                        | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:57 UTC | 11 Mar 24 12:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-127043 addons                                                                        | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:57 UTC | 11 Mar 24 12:57 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:57 UTC | 11 Mar 24 12:57 UTC |
	|         | addons-127043                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:57 UTC | 11 Mar 24 12:57 UTC |
	|         | -p addons-127043                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-127043 addons                                                                        | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:58 UTC | 11 Mar 24 12:58 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:58 UTC | 11 Mar 24 12:58 UTC |
	|         | addons-127043                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-127043 ssh curl -s                                                                   | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 12:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-127043 ip                                                                            | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 13:00 UTC | 11 Mar 24 13:00 UTC |
	| addons  | addons-127043 addons disable                                                                | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 13:00 UTC | 11 Mar 24 13:00 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-127043 addons disable                                                                | addons-127043          | jenkins | v1.32.0 | 11 Mar 24 13:00 UTC | 11 Mar 24 13:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:53:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:53:55.286429 1130717 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:53:55.286614 1130717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:53:55.286625 1130717 out.go:304] Setting ErrFile to fd 2...
	I0311 12:53:55.286630 1130717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:53:55.286876 1130717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 12:53:55.287334 1130717 out.go:298] Setting JSON to false
	I0311 12:53:55.288202 1130717 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16583,"bootTime":1710145053,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0311 12:53:55.288274 1130717 start.go:139] virtualization:  
	I0311 12:53:55.290704 1130717 out.go:177] * [addons-127043] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:53:55.292955 1130717 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 12:53:55.294998 1130717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:53:55.293037 1130717 notify.go:220] Checking for updates...
	I0311 12:53:55.298857 1130717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 12:53:55.300866 1130717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	I0311 12:53:55.302806 1130717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 12:53:55.304731 1130717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 12:53:55.306760 1130717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:53:55.332168 1130717 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:53:55.332293 1130717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:53:55.400506 1130717 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:53:55.39088706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:53:55.400625 1130717 docker.go:295] overlay module found
	I0311 12:53:55.402866 1130717 out.go:177] * Using the docker driver based on user configuration
	I0311 12:53:55.404882 1130717 start.go:297] selected driver: docker
	I0311 12:53:55.404898 1130717 start.go:901] validating driver "docker" against <nil>
	I0311 12:53:55.404911 1130717 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 12:53:55.405639 1130717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:53:55.471591 1130717 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:53:55.462790123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:53:55.471760 1130717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:53:55.472000 1130717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 12:53:55.473881 1130717 out.go:177] * Using Docker driver with root privileges
	I0311 12:53:55.475868 1130717 cni.go:84] Creating CNI manager for ""
	I0311 12:53:55.475894 1130717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0311 12:53:55.475906 1130717 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:53:55.475986 1130717 start.go:340] cluster config:
	{Name:addons-127043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-127043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:53:55.478183 1130717 out.go:177] * Starting "addons-127043" primary control-plane node in "addons-127043" cluster
	I0311 12:53:55.480060 1130717 cache.go:121] Beginning downloading kic base image for docker with crio
	I0311 12:53:55.482107 1130717 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:53:55.483939 1130717 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:53:55.484101 1130717 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 12:53:55.484134 1130717 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0311 12:53:55.484146 1130717 cache.go:56] Caching tarball of preloaded images
	I0311 12:53:55.484220 1130717 preload.go:173] Found /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0311 12:53:55.484236 1130717 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 12:53:55.484581 1130717 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/config.json ...
	I0311 12:53:55.484618 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/config.json: {Name:mk18dbf226451d04d4e5a5cadfff66304e9a5b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:53:55.499367 1130717 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:53:55.499490 1130717 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:53:55.499515 1130717 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 12:53:55.499524 1130717 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 12:53:55.499532 1130717 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 12:53:55.499542 1130717 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0311 12:54:11.466160 1130717 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0311 12:54:11.466199 1130717 cache.go:194] Successfully downloaded all kic artifacts
	I0311 12:54:11.466248 1130717 start.go:360] acquireMachinesLock for addons-127043: {Name:mka6a225aeeb5bfc7dd650363219d0b367a02390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 12:54:11.466757 1130717 start.go:364] duration metric: took 480.349µs to acquireMachinesLock for "addons-127043"
	I0311 12:54:11.466799 1130717 start.go:93] Provisioning new machine with config: &{Name:addons-127043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-127043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 12:54:11.466893 1130717 start.go:125] createHost starting for "" (driver="docker")
	I0311 12:54:11.469451 1130717 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0311 12:54:11.469701 1130717 start.go:159] libmachine.API.Create for "addons-127043" (driver="docker")
	I0311 12:54:11.469747 1130717 client.go:168] LocalClient.Create starting
	I0311 12:54:11.469887 1130717 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem
	I0311 12:54:11.940173 1130717 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem
	I0311 12:54:12.229500 1130717 cli_runner.go:164] Run: docker network inspect addons-127043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0311 12:54:12.246826 1130717 cli_runner.go:211] docker network inspect addons-127043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0311 12:54:12.246925 1130717 network_create.go:281] running [docker network inspect addons-127043] to gather additional debugging logs...
	I0311 12:54:12.246947 1130717 cli_runner.go:164] Run: docker network inspect addons-127043
	W0311 12:54:12.262405 1130717 cli_runner.go:211] docker network inspect addons-127043 returned with exit code 1
	I0311 12:54:12.262439 1130717 network_create.go:284] error running [docker network inspect addons-127043]: docker network inspect addons-127043: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-127043 not found
	I0311 12:54:12.262453 1130717 network_create.go:286] output of [docker network inspect addons-127043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-127043 not found
	
	** /stderr **
	I0311 12:54:12.262556 1130717 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 12:54:12.279085 1130717 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000421e10}
	I0311 12:54:12.279134 1130717 network_create.go:124] attempt to create docker network addons-127043 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0311 12:54:12.279196 1130717 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-127043 addons-127043
	I0311 12:54:12.339913 1130717 network_create.go:108] docker network addons-127043 192.168.49.0/24 created
	I0311 12:54:12.339949 1130717 kic.go:121] calculated static IP "192.168.49.2" for the "addons-127043" container
	I0311 12:54:12.340026 1130717 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0311 12:54:12.354378 1130717 cli_runner.go:164] Run: docker volume create addons-127043 --label name.minikube.sigs.k8s.io=addons-127043 --label created_by.minikube.sigs.k8s.io=true
	I0311 12:54:12.370474 1130717 oci.go:103] Successfully created a docker volume addons-127043
	I0311 12:54:12.370572 1130717 cli_runner.go:164] Run: docker run --rm --name addons-127043-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-127043 --entrypoint /usr/bin/test -v addons-127043:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0311 12:54:14.539220 1130717 cli_runner.go:217] Completed: docker run --rm --name addons-127043-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-127043 --entrypoint /usr/bin/test -v addons-127043:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.16860092s)
	I0311 12:54:14.539252 1130717 oci.go:107] Successfully prepared a docker volume addons-127043
	I0311 12:54:14.539280 1130717 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 12:54:14.539299 1130717 kic.go:194] Starting extracting preloaded images to volume ...
	I0311 12:54:14.539381 1130717 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-127043:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0311 12:54:18.778969 1130717 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-127043:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.239528512s)
	I0311 12:54:18.779009 1130717 kic.go:203] duration metric: took 4.239706542s to extract preloaded images to volume ...
	W0311 12:54:18.779139 1130717 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0311 12:54:18.779251 1130717 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0311 12:54:18.831164 1130717 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-127043 --name addons-127043 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-127043 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-127043 --network addons-127043 --ip 192.168.49.2 --volume addons-127043:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0311 12:54:19.184420 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Running}}
	I0311 12:54:19.213827 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:19.244019 1130717 cli_runner.go:164] Run: docker exec addons-127043 stat /var/lib/dpkg/alternatives/iptables
	I0311 12:54:19.314319 1130717 oci.go:144] the created container "addons-127043" has a running status.
	I0311 12:54:19.314347 1130717 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa...
	I0311 12:54:20.168512 1130717 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0311 12:54:20.190672 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:20.210747 1130717 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0311 12:54:20.210766 1130717 kic_runner.go:114] Args: [docker exec --privileged addons-127043 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0311 12:54:20.265955 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:20.290225 1130717 machine.go:94] provisionDockerMachine start ...
	I0311 12:54:20.290322 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:20.305154 1130717 main.go:141] libmachine: Using SSH client type: native
	I0311 12:54:20.305575 1130717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I0311 12:54:20.305591 1130717 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 12:54:20.433073 1130717 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-127043
	
	I0311 12:54:20.433109 1130717 ubuntu.go:169] provisioning hostname "addons-127043"
	I0311 12:54:20.433230 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:20.452116 1130717 main.go:141] libmachine: Using SSH client type: native
	I0311 12:54:20.452368 1130717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I0311 12:54:20.452379 1130717 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-127043 && echo "addons-127043" | sudo tee /etc/hostname
	I0311 12:54:20.592916 1130717 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-127043
	
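
Each "About to run SSH command" entry is one command executed through the container's forwarded SSH port (127.0.0.1:33932 above). A hedged sketch of that round trip with golang.org/x/crypto/ssh; user, port, and key path are taken from the log, and the host-key check is skipped only because the target is a throwaway local container:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33932", cfg) // port mapped by docker, see the inspect calls above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	// Same command the provisioner runs to set the hostname.
    	out, err := session.CombinedOutput(`sudo hostname addons-127043 && echo "addons-127043" | sudo tee /etc/hostname`)
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
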
	I0311 12:54:20.593094 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:20.610104 1130717 main.go:141] libmachine: Using SSH client type: native
	I0311 12:54:20.610362 1130717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I0311 12:54:20.610378 1130717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-127043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-127043/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-127043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 12:54:20.742016 1130717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 12:54:20.742044 1130717 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18350-1124504/.minikube CaCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18350-1124504/.minikube}
	I0311 12:54:20.742084 1130717 ubuntu.go:177] setting up certificates
	I0311 12:54:20.742094 1130717 provision.go:84] configureAuth start
	I0311 12:54:20.742161 1130717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-127043
	I0311 12:54:20.763000 1130717 provision.go:143] copyHostCerts
	I0311 12:54:20.763096 1130717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem (1123 bytes)
	I0311 12:54:20.763213 1130717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem (1675 bytes)
	I0311 12:54:20.763277 1130717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem (1078 bytes)
	I0311 12:54:20.763335 1130717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem org=jenkins.addons-127043 san=[127.0.0.1 192.168.49.2 addons-127043 localhost minikube]
	I0311 12:54:20.946457 1130717 provision.go:177] copyRemoteCerts
	I0311 12:54:20.946526 1130717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 12:54:20.946566 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:20.963555 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:21.058574 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 12:54:21.084137 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 12:54:21.109247 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0311 12:54:21.135457 1130717 provision.go:87] duration metric: took 393.336258ms to configureAuth
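
configureAuth issues a server certificate whose subject alternative names cover every address the machine answers on (san=[127.0.0.1 192.168.49.2 addons-127043 localhost minikube] above). A sketch of signing such a certificate with crypto/x509, assuming the CA certificate and key were already loaded from ca.pem and ca-key.pem; the serial number and three-year lifetime are placeholders:

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // SignServerCert issues a server certificate signed by caCert/caKey with
    // the same SANs the log shows for addons-127043.
    func SignServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1), // real code must use a unique serial
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-127043"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"addons-127043", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }
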
	I0311 12:54:21.135488 1130717 ubuntu.go:193] setting minikube options for container-runtime
	I0311 12:54:21.135707 1130717 config.go:182] Loaded profile config "addons-127043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 12:54:21.135820 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:21.155650 1130717 main.go:141] libmachine: Using SSH client type: native
	I0311 12:54:21.155906 1130717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33932 <nil> <nil>}
	I0311 12:54:21.155928 1130717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 12:54:21.382699 1130717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 12:54:21.382726 1130717 machine.go:97] duration metric: took 1.092479338s to provisionDockerMachine
	I0311 12:54:21.382737 1130717 client.go:171] duration metric: took 9.91298035s to LocalClient.Create
	I0311 12:54:21.382750 1130717 start.go:167] duration metric: took 9.913050166s to libmachine.API.Create "addons-127043"
	I0311 12:54:21.382758 1130717 start.go:293] postStartSetup for "addons-127043" (driver="docker")
	I0311 12:54:21.382769 1130717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 12:54:21.382835 1130717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 12:54:21.382883 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:21.401082 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:21.500646 1130717 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 12:54:21.504330 1130717 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0311 12:54:21.504377 1130717 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0311 12:54:21.504389 1130717 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0311 12:54:21.504397 1130717 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0311 12:54:21.504407 1130717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-1124504/.minikube/addons for local assets ...
	I0311 12:54:21.504499 1130717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-1124504/.minikube/files for local assets ...
	I0311 12:54:21.504525 1130717 start.go:296] duration metric: took 121.761359ms for postStartSetup
	I0311 12:54:21.504849 1130717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-127043
	I0311 12:54:21.520418 1130717 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/config.json ...
	I0311 12:54:21.520708 1130717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 12:54:21.520767 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:21.536609 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:21.626083 1130717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0311 12:54:21.630380 1130717 start.go:128] duration metric: took 10.163462856s to createHost
	I0311 12:54:21.630410 1130717 start.go:83] releasing machines lock for "addons-127043", held for 10.163633238s
	I0311 12:54:21.630481 1130717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-127043
	I0311 12:54:21.645184 1130717 ssh_runner.go:195] Run: cat /version.json
	I0311 12:54:21.645239 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:21.645483 1130717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 12:54:21.645546 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:21.666806 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:21.670203 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:21.867774 1130717 ssh_runner.go:195] Run: systemctl --version
	I0311 12:54:21.872121 1130717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 12:54:22.012282 1130717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 12:54:22.016962 1130717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 12:54:22.039437 1130717 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0311 12:54:22.039562 1130717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 12:54:22.075913 1130717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
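
The find/mv pair above disables conflicting CNI configs by renaming rather than deleting: anything matching *bridge* or *podman* that is not already suffixed .mk_disabled gets the suffix, so CRI-O ignores it and kindnet (recommended later in the log) can own pod networking. The same effect as a local Go sketch; minikube itself shells out to find over SSH, as logged:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Match the same families of configs the log disables.
    	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
    	for _, p := range patterns {
    		matches, err := filepath.Glob(p)
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				log.Fatal(err)
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }
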
	I0311 12:54:22.075950 1130717 start.go:494] detecting cgroup driver to use...
	I0311 12:54:22.075984 1130717 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0311 12:54:22.076048 1130717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 12:54:22.092768 1130717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 12:54:22.104585 1130717 docker.go:217] disabling cri-docker service (if available) ...
	I0311 12:54:22.104662 1130717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 12:54:22.120468 1130717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 12:54:22.135051 1130717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 12:54:22.231839 1130717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 12:54:22.334403 1130717 docker.go:233] disabling docker service ...
	I0311 12:54:22.334516 1130717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 12:54:22.355080 1130717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 12:54:22.367223 1130717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 12:54:22.454772 1130717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 12:54:22.544347 1130717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 12:54:22.555628 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 12:54:22.574656 1130717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 12:54:22.574729 1130717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 12:54:22.586056 1130717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 12:54:22.586127 1130717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 12:54:22.597025 1130717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 12:54:22.607595 1130717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
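
The three sed invocations above pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs", and re-insert conmon_cgroup = "pod" in the CRI-O drop-in. The net effect, expressed as an in-place rewrite in Go (a sketch of the result, not minikube's implementation, which runs sed remotely):

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Pin the pause image, as the first sed does.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).ReplaceAll(data, nil)
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
    	if err := os.WriteFile(path, data, 0644); err != nil {
    		log.Fatal(err)
    	}
    }
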
	I0311 12:54:22.618340 1130717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 12:54:22.628265 1130717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 12:54:22.637109 1130717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 12:54:22.645777 1130717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:54:22.732337 1130717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 12:54:22.843461 1130717 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 12:54:22.843566 1130717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 12:54:22.847240 1130717 start.go:562] Will wait 60s for crictl version
	I0311 12:54:22.847342 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:54:22.850902 1130717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 12:54:22.894525 1130717 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
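
"Will wait 60s for socket path" and "Will wait 60s for crictl version" are plain poll-with-deadline loops: stat the CRI-O socket (or run crictl) until it succeeds or the budget is exhausted. A sketch of that wait; the 500ms poll interval is an assumption:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the timeout elapses, mirroring
    // the "Will wait 60s for socket path" step in the log.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("crio socket is up")
    }
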
	I0311 12:54:22.894640 1130717 ssh_runner.go:195] Run: crio --version
	I0311 12:54:22.933386 1130717 ssh_runner.go:195] Run: crio --version
	I0311 12:54:22.972209 1130717 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0311 12:54:22.974260 1130717 cli_runner.go:164] Run: docker network inspect addons-127043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 12:54:22.989079 1130717 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0311 12:54:22.992689 1130717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
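
The grep/bash -c pair above is an idempotent hosts-file update: strip any existing host.minikube.internal line, append the fresh 192.168.49.1 mapping, and copy the result over /etc/hosts. The same logic in Go, writing the file directly for simplicity (the logged one-liner stages through /tmp/h.$$ plus sudo cp only because the remote write needs root):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.49.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any existing mapping for host.minikube.internal, stale or not.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }
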
	I0311 12:54:23.005738 1130717 kubeadm.go:877] updating cluster {Name:addons-127043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-127043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 12:54:23.005872 1130717 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 12:54:23.005938 1130717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 12:54:23.076409 1130717 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 12:54:23.076437 1130717 crio.go:415] Images already preloaded, skipping extraction
	I0311 12:54:23.076493 1130717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 12:54:23.119353 1130717 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 12:54:23.119378 1130717 cache_images.go:84] Images are preloaded, skipping loading
	I0311 12:54:23.119387 1130717 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 crio true true} ...
	I0311 12:54:23.119486 1130717 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-127043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-127043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 12:54:23.119578 1130717 ssh_runner.go:195] Run: crio config
	I0311 12:54:23.186647 1130717 cni.go:84] Creating CNI manager for ""
	I0311 12:54:23.186670 1130717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0311 12:54:23.186681 1130717 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 12:54:23.186732 1130717 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-127043 NodeName:addons-127043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 12:54:23.186896 1130717 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-127043"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
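
The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml a few lines below and passed to kubeadm init. When hand-editing a config like this, it can be sanity-checked first with kubeadm's dry-run mode, which parses and validates the file without touching node state; a small sketch, assuming kubeadm is on PATH:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// kubeadm init --dry-run validates the config and prints what it
    	// would do, without writing manifests or starting components.
    	cmd := exec.Command("kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--dry-run")
    	out, err := cmd.CombinedOutput()
    	log.Printf("%s", out)
    	if err != nil {
    		log.Fatalf("config rejected: %v", err)
    	}
    }
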
	I0311 12:54:23.186975 1130717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 12:54:23.195813 1130717 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 12:54:23.195882 1130717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 12:54:23.204701 1130717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0311 12:54:23.222600 1130717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 12:54:23.240901 1130717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0311 12:54:23.259127 1130717 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0311 12:54:23.262530 1130717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 12:54:23.273888 1130717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:54:23.370922 1130717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 12:54:23.384575 1130717 certs.go:68] Setting up /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043 for IP: 192.168.49.2
	I0311 12:54:23.384652 1130717 certs.go:194] generating shared ca certs ...
	I0311 12:54:23.384683 1130717 certs.go:226] acquiring lock for ca certs: {Name:mk30659f158a045ae3a6809b62fbd61891660c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:23.384895 1130717 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key
	I0311 12:54:23.552470 1130717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt ...
	I0311 12:54:23.552510 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt: {Name:mk3de99cc5b2ec94fbc45e834b4e9d48bb6ccc33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:23.553241 1130717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key ...
	I0311 12:54:23.553258 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key: {Name:mk55c0d917fc57e236545476b53eaf235cd18cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:23.553401 1130717 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key
	I0311 12:54:24.010127 1130717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt ...
	I0311 12:54:24.010166 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt: {Name:mkdf449e36cade89ffe653f5163f43ee4559f713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:24.010423 1130717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key ...
	I0311 12:54:24.010440 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key: {Name:mk74d0a2e774fee7894aec435e15b1fe9d6be0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:24.010561 1130717 certs.go:256] generating profile certs ...
	I0311 12:54:24.010640 1130717 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.key
	I0311 12:54:24.010658 1130717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt with IP's: []
	I0311 12:54:24.646525 1130717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt ...
	I0311 12:54:24.646556 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: {Name:mkb2a37a0fb5746895eb5aa98a0b43013c420a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:24.646747 1130717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.key ...
	I0311 12:54:24.646761 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.key: {Name:mk35bacbfb5463b539191dbe9abb8fce1f09e8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:24.646847 1130717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.key.12d8e911
	I0311 12:54:24.646869 1130717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.crt.12d8e911 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0311 12:54:25.215496 1130717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.crt.12d8e911 ...
	I0311 12:54:25.215535 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.crt.12d8e911: {Name:mk97a5916793e4a41039aefcc50db22fc8e08fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:25.215733 1130717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.key.12d8e911 ...
	I0311 12:54:25.215749 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.key.12d8e911: {Name:mk5ee08f90ef3dbf054561c6de2f4b75d845ce30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:25.215837 1130717 certs.go:381] copying /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.crt.12d8e911 -> /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.crt
	I0311 12:54:25.215917 1130717 certs.go:385] copying /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.key.12d8e911 -> /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.key
	I0311 12:54:25.215973 1130717 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/proxy-client.key
	I0311 12:54:25.215993 1130717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/proxy-client.crt with IP's: []
	I0311 12:54:25.594244 1130717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/proxy-client.crt ...
	I0311 12:54:25.594276 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/proxy-client.crt: {Name:mke748256c5910104ab53c2625106cdccfba9cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:25.594474 1130717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/proxy-client.key ...
	I0311 12:54:25.594491 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/proxy-client.key: {Name:mk07a38d12e38a175f96fdd55aafc5070ad1a937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:25.594691 1130717 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 12:54:25.594735 1130717 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem (1078 bytes)
	I0311 12:54:25.594765 1130717 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem (1123 bytes)
	I0311 12:54:25.594791 1130717 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem (1675 bytes)
	I0311 12:54:25.595349 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 12:54:25.621214 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 12:54:25.645595 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 12:54:25.668858 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 12:54:25.697219 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0311 12:54:25.728624 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 12:54:25.759173 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 12:54:25.783250 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 12:54:25.807125 1130717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 12:54:25.831082 1130717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
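
The "scp memory --> <path>" entries differ from the other scp lines: the payload never exists as a local file, it is rendered in memory and streamed to the remote path. A hedged sketch of one way to implement that pattern, piping a buffer into sudo tee over an SSH session (the client would be set up as in the earlier SSH sketch; whether minikube uses tee specifically is an assumption):

    package copier

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // CopyMemory writes an in-memory blob to a root-owned remote file, in the
    // spirit of the "scp memory --> /var/lib/minikube/kubeconfig" step.
    func CopyMemory(client *ssh.Client, contents []byte, dst string) error {
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	session.Stdin = bytes.NewReader(contents)
    	// tee duplicates stdin into dst; >/dev/null silences the echo.
    	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }
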
	I0311 12:54:25.848548 1130717 ssh_runner.go:195] Run: openssl version
	I0311 12:54:25.854138 1130717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 12:54:25.863477 1130717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:54:25.867097 1130717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:54:25.867182 1130717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 12:54:25.874015 1130717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
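
The openssl/ln pair above is how OpenSSL-style trust stores work: certificates in /etc/ssl/certs are looked up by a filename derived from the subject hash, so minikubeCA.pem becomes reachable as b5213941.0. The same two steps driven from Go, as a local sketch:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pemPath = "/etc/ssl/certs/minikubeCA.pem"
    	// openssl x509 -hash prints the subject hash used for lookup.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // b5213941 for this CA, per the log
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// Recreate the hash link idempotently, like the ln -fs in the log.
    	_ = os.Remove(link)
    	if err := os.Symlink(pemPath, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("CA trusted via", link)
    }
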
	I0311 12:54:25.883218 1130717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 12:54:25.886414 1130717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 12:54:25.886512 1130717 kubeadm.go:391] StartCluster: {Name:addons-127043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-127043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:54:25.886595 1130717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 12:54:25.886650 1130717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 12:54:25.926274 1130717 cri.go:89] found id: ""
	I0311 12:54:25.926396 1130717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 12:54:25.934816 1130717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 12:54:25.943417 1130717 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0311 12:54:25.943479 1130717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 12:54:25.952124 1130717 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 12:54:25.952148 1130717 kubeadm.go:156] found existing configuration files:
	
	I0311 12:54:25.952227 1130717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 12:54:25.960801 1130717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 12:54:25.960887 1130717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 12:54:25.969047 1130717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 12:54:25.977550 1130717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 12:54:25.977615 1130717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 12:54:25.985913 1130717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 12:54:25.995901 1130717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 12:54:25.995978 1130717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 12:54:26.007496 1130717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 12:54:26.017050 1130717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 12:54:26.017117 1130717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 12:54:26.025867 1130717 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0311 12:54:26.068255 1130717 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 12:54:26.068465 1130717 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 12:54:26.107569 1130717 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0311 12:54:26.107644 1130717 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0311 12:54:26.107681 1130717 kubeadm.go:309] OS: Linux
	I0311 12:54:26.107730 1130717 kubeadm.go:309] CGROUPS_CPU: enabled
	I0311 12:54:26.107780 1130717 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0311 12:54:26.107829 1130717 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0311 12:54:26.107881 1130717 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0311 12:54:26.107930 1130717 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0311 12:54:26.107984 1130717 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0311 12:54:26.108033 1130717 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0311 12:54:26.108083 1130717 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0311 12:54:26.108131 1130717 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0311 12:54:26.180550 1130717 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 12:54:26.180722 1130717 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 12:54:26.180851 1130717 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 12:54:26.409677 1130717 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 12:54:26.413572 1130717 out.go:204]   - Generating certificates and keys ...
	I0311 12:54:26.413739 1130717 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 12:54:26.413831 1130717 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 12:54:26.906774 1130717 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 12:54:27.405701 1130717 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 12:54:27.634728 1130717 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 12:54:27.806769 1130717 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 12:54:28.486700 1130717 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 12:54:28.486839 1130717 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-127043 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0311 12:54:28.767603 1130717 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 12:54:28.767743 1130717 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-127043 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0311 12:54:29.038593 1130717 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 12:54:29.432990 1130717 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 12:54:30.855183 1130717 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 12:54:30.855466 1130717 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 12:54:31.091347 1130717 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 12:54:31.257688 1130717 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 12:54:31.477958 1130717 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 12:54:32.154439 1130717 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 12:54:32.155447 1130717 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 12:54:32.158583 1130717 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 12:54:32.162643 1130717 out.go:204]   - Booting up control plane ...
	I0311 12:54:32.162745 1130717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 12:54:32.162828 1130717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 12:54:32.163039 1130717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 12:54:32.174229 1130717 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 12:54:32.176926 1130717 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 12:54:32.177172 1130717 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 12:54:32.273084 1130717 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 12:54:39.278596 1130717 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.005310 seconds
	I0311 12:54:39.278723 1130717 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 12:54:39.296299 1130717 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 12:54:39.823110 1130717 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 12:54:39.823298 1130717 kubeadm.go:309] [mark-control-plane] Marking the node addons-127043 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 12:54:40.335008 1130717 kubeadm.go:309] [bootstrap-token] Using token: dlkphy.d2y8mtgihxt8he1o
	I0311 12:54:40.337379 1130717 out.go:204]   - Configuring RBAC rules ...
	I0311 12:54:40.337506 1130717 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 12:54:40.342914 1130717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 12:54:40.350940 1130717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 12:54:40.354993 1130717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 12:54:40.360608 1130717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 12:54:40.365132 1130717 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 12:54:40.380592 1130717 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 12:54:40.608092 1130717 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 12:54:40.750003 1130717 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 12:54:40.751130 1130717 kubeadm.go:309] 
	I0311 12:54:40.751214 1130717 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 12:54:40.751225 1130717 kubeadm.go:309] 
	I0311 12:54:40.751299 1130717 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 12:54:40.751309 1130717 kubeadm.go:309] 
	I0311 12:54:40.751333 1130717 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 12:54:40.751394 1130717 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 12:54:40.751446 1130717 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 12:54:40.751455 1130717 kubeadm.go:309] 
	I0311 12:54:40.751507 1130717 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 12:54:40.751517 1130717 kubeadm.go:309] 
	I0311 12:54:40.751563 1130717 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 12:54:40.751572 1130717 kubeadm.go:309] 
	I0311 12:54:40.751622 1130717 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 12:54:40.751698 1130717 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 12:54:40.751765 1130717 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 12:54:40.751773 1130717 kubeadm.go:309] 
	I0311 12:54:40.751854 1130717 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 12:54:40.751931 1130717 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 12:54:40.751939 1130717 kubeadm.go:309] 
	I0311 12:54:40.752019 1130717 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dlkphy.d2y8mtgihxt8he1o \
	I0311 12:54:40.752122 1130717 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2a8e91d8f0d2883530b18ebb6913f2e4ec54b5f4373a0076050e22c76634c4dc \
	I0311 12:54:40.752144 1130717 kubeadm.go:309] 	--control-plane 
	I0311 12:54:40.752152 1130717 kubeadm.go:309] 
	I0311 12:54:40.752234 1130717 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 12:54:40.752241 1130717 kubeadm.go:309] 
	I0311 12:54:40.752320 1130717 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dlkphy.d2y8mtgihxt8he1o \
	I0311 12:54:40.753582 1130717 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2a8e91d8f0d2883530b18ebb6913f2e4ec54b5f4373a0076050e22c76634c4dc 
	I0311 12:54:40.755920 1130717 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0311 12:54:40.756043 1130717 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 12:54:40.756063 1130717 cni.go:84] Creating CNI manager for ""
	I0311 12:54:40.756071 1130717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0311 12:54:40.758575 1130717 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0311 12:54:40.760443 1130717 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0311 12:54:40.764997 1130717 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0311 12:54:40.765021 1130717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0311 12:54:40.788002 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0311 12:54:41.689860 1130717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 12:54:41.689993 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:41.690085 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-127043 minikube.k8s.io/updated_at=2024_03_11T12_54_41_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563 minikube.k8s.io/name=addons-127043 minikube.k8s.io/primary=true
	I0311 12:54:41.703771 1130717 ops.go:34] apiserver oom_adj: -16
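
ops.go reports the API server's OOM score adjustment (-16 here), read from /proc/<pid>/oom_adj after resolving the pid with pgrep, exactly as the bash -c line above shows. The same read in Go:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	pid := strings.Fields(string(out))[0] // pgrep may list several pids; take the first
    	adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
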
	I0311 12:54:41.823745 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:42.324807 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:42.824351 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:43.324678 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:43.823900 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:44.324728 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:44.824302 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:45.324627 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:45.824069 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:46.324424 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:46.824847 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:47.324787 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:47.824215 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:48.324548 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:48.823876 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:49.323813 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:49.824468 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:50.324836 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:50.824160 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:51.324518 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:51.824282 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:52.324679 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:52.824054 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:53.324844 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:53.824656 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:54.324407 1130717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 12:54:54.454251 1130717 kubeadm.go:1106] duration metric: took 12.764301931s to wait for elevateKubeSystemPrivileges
	W0311 12:54:54.454296 1130717 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 12:54:54.454303 1130717 kubeadm.go:393] duration metric: took 28.567796323s to StartCluster
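The ~500ms cadence in the run above is kubeadm.go waiting for the "default" ServiceAccount to appear before elevating kube-system privileges (the 12.76s duration metric covers the whole loop). A minimal sketch of the same poll, shelling out the way the log's ssh_runner does; the helper name and the 2-minute cap are illustrative, not minikube's:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultServiceAccount retries `kubectl get sa default` until it
    // exits 0, sleeping 500ms between attempts, like the loop in the log.
    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if cmd.Run() == nil {
    			return nil // exit 0: the ServiceAccount exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
    }

    func main() {
    	fmt.Println(waitForDefaultServiceAccount(
    		"/var/lib/minikube/binaries/v1.28.4/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		2*time.Minute))
    }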
	I0311 12:54:54.454326 1130717 settings.go:142] acquiring lock: {Name:mk0a76f674884ed0c489dd40a16d57ce9e1cba50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 12:54:54.454445 1130717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 12:54:54.454876 1130717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/kubeconfig: {Name:mk1044b4a136be32fc018b928173d9e5fa18a2ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
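The {Delay:500ms Timeout:1m0s} printed by lock.go:35 is an acquire-with-retry policy guarding the kubeconfig write, so concurrent minikube processes don't clobber each other. A rough stand-in using an O_EXCL lock file with the same delay and timeout; minikube's real locking goes through its own lock package, and withFileLock is a hypothetical helper:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func withFileLock(path string, delay, timeout time.Duration, fn func() error) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(timeout)
    	for {
    		// O_EXCL makes creation atomic: only one process wins the lock.
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			defer os.Remove(lock)
    			f.Close()
    			return fn()
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out acquiring %s", lock)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	err := withFileLock("/home/jenkins/minikube-integration/18350-1124504/kubeconfig",
    		500*time.Millisecond, time.Minute, func() error {
    			// write the updated kubeconfig here
    			return nil
    		})
    	fmt.Println(err)
    }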
	I0311 12:54:54.455067 1130717 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 12:54:54.455197 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 12:54:54.455369 1130717 config.go:182] Loaded profile config "addons-127043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 12:54:54.455377 1130717 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0311 12:54:54.457151 1130717 out.go:177] * Verifying Kubernetes components...
	I0311 12:54:54.458798 1130717 addons.go:69] Setting yakd=true in profile "addons-127043"
	I0311 12:54:54.458826 1130717 addons.go:234] Setting addon yakd=true in "addons-127043"
	I0311 12:54:54.458861 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.459372 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.459527 1130717 addons.go:69] Setting ingress=true in profile "addons-127043"
	I0311 12:54:54.459551 1130717 addons.go:234] Setting addon ingress=true in "addons-127043"
	I0311 12:54:54.459591 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.459983 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.461307 1130717 addons.go:69] Setting ingress-dns=true in profile "addons-127043"
	I0311 12:54:54.461373 1130717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 12:54:54.461410 1130717 addons.go:234] Setting addon ingress-dns=true in "addons-127043"
	I0311 12:54:54.461454 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.461691 1130717 addons.go:69] Setting cloud-spanner=true in profile "addons-127043"
	I0311 12:54:54.461712 1130717 addons.go:234] Setting addon cloud-spanner=true in "addons-127043"
	I0311 12:54:54.461731 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.461925 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.462103 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.470593 1130717 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-127043"
	I0311 12:54:54.470719 1130717 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-127043"
	I0311 12:54:54.471144 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.472304 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.475849 1130717 addons.go:69] Setting default-storageclass=true in profile "addons-127043"
	I0311 12:54:54.475932 1130717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-127043"
	I0311 12:54:54.476483 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.474577 1130717 addons.go:69] Setting inspektor-gadget=true in profile "addons-127043"
	I0311 12:54:54.484364 1130717 addons.go:234] Setting addon inspektor-gadget=true in "addons-127043"
	I0311 12:54:54.484468 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.485026 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.499047 1130717 addons.go:69] Setting gcp-auth=true in profile "addons-127043"
	I0311 12:54:54.499148 1130717 mustload.go:65] Loading cluster: addons-127043
	I0311 12:54:54.499369 1130717 config.go:182] Loaded profile config "addons-127043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 12:54:54.499715 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.474593 1130717 addons.go:69] Setting metrics-server=true in profile "addons-127043"
	I0311 12:54:54.502531 1130717 addons.go:234] Setting addon metrics-server=true in "addons-127043"
	I0311 12:54:54.502626 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.503197 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.474598 1130717 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-127043"
	I0311 12:54:54.541622 1130717 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-127043"
	I0311 12:54:54.541692 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.542185 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.474602 1130717 addons.go:69] Setting registry=true in profile "addons-127043"
	I0311 12:54:54.564636 1130717 addons.go:234] Setting addon registry=true in "addons-127043"
	I0311 12:54:54.564685 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.565185 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.474607 1130717 addons.go:69] Setting storage-provisioner=true in profile "addons-127043"
	I0311 12:54:54.585667 1130717 addons.go:234] Setting addon storage-provisioner=true in "addons-127043"
	I0311 12:54:54.585717 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.586169 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.474611 1130717 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-127043"
	I0311 12:54:54.605597 1130717 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-127043"
	I0311 12:54:54.605924 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.474621 1130717 addons.go:69] Setting volumesnapshots=true in profile "addons-127043"
	I0311 12:54:54.619040 1130717 addons.go:234] Setting addon volumesnapshots=true in "addons-127043"
	I0311 12:54:54.619088 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.619551 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.659146 1130717 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0311 12:54:54.695621 1130717 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0311 12:54:54.695644 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0311 12:54:54.695714 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
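"scp memory --> <path> (N bytes)" means the manifest is rendered in memory and streamed straight to the node over the SSH endpoint that the cli_runner port inspection just resolved; nothing is staged on local disk. A rough equivalent with plain ssh, using the port, user, and key that the sshutil lines below report for this cluster; scpMemory is a hypothetical helper, not minikube's API:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // scpMemory pipes in-memory bytes into `sudo tee` on the node.
    func scpMemory(contents []byte, remotePath string) error {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-i", "/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa",
    		"-p", "33932", "docker@127.0.0.1",
    		"sudo tee "+remotePath+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(contents)
    	return cmd.Run()
    }

    func main() {
    	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: yakd-dashboard\n")
    	fmt.Println(scpMemory(manifest, "/etc/kubernetes/addons/yakd-ns.yaml"))
    }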
	I0311 12:54:54.697500 1130717 addons.go:234] Setting addon default-storageclass=true in "addons-127043"
	I0311 12:54:54.697537 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.697960 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:54.727430 1130717 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0311 12:54:54.729507 1130717 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 12:54:54.729539 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0311 12:54:54.729603 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:54.735211 1130717 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0311 12:54:54.736956 1130717 out.go:177]   - Using image docker.io/registry:2.8.3
	I0311 12:54:54.737010 1130717 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0311 12:54:54.737122 1130717 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0311 12:54:54.737127 1130717 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:54:54.737131 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0311 12:54:54.737135 1130717 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0311 12:54:54.738848 1130717 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0311 12:54:54.740542 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0311 12:54:54.740631 1130717 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0311 12:54:54.742224 1130717 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0311 12:54:54.742247 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0311 12:54:54.742310 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:54.742600 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:54.761833 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0311 12:54:54.761962 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:54.762721 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0311 12:54:54.762781 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:54.764653 1130717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 12:54:54.770285 1130717 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 12:54:54.770306 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 12:54:54.770370 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:54.795403 1130717 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:54:54.800450 1130717 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0311 12:54:54.807187 1130717 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0311 12:54:54.850656 1130717 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 12:54:54.850725 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0311 12:54:54.850828 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:54.857698 1130717 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 12:54:54.857785 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0311 12:54:54.857887 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:54.874624 1130717 node_ready.go:35] waiting up to 6m0s for node "addons-127043" to be "Ready" ...
	I0311 12:54:54.889405 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0311 12:54:54.891542 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0311 12:54:54.897193 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0311 12:54:54.905732 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0311 12:54:54.919263 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0311 12:54:54.936664 1130717 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-127043"
	I0311 12:54:54.937039 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:54.937067 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:54.942122 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0311 12:54:54.944379 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0311 12:54:54.946268 1130717 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0311 12:54:54.946288 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0311 12:54:54.946355 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:54.977825 1130717 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0311 12:54:54.981090 1130717 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0311 12:54:54.981116 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0311 12:54:54.981193 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:55.000667 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.001628 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.003857 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.005077 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:54:55.005766 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:54:55.059197 1130717 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 12:54:55.065860 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.065935 1130717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 12:54:55.066236 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 12:54:55.068758 1130717 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 12:54:55.068792 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 12:54:55.068950 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:55.069486 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:55.081217 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.082018 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.128891 1130717 out.go:177]   - Using image docker.io/busybox:stable
	I0311 12:54:55.130759 1130717 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0311 12:54:55.132470 1130717 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 12:54:55.132489 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0311 12:54:55.132555 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:54:55.139252 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.171021 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.183682 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.184227 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.209298 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:54:55.417559 1130717 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0311 12:54:55.417592 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0311 12:54:55.492444 1130717 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 12:54:55.492468 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0311 12:54:55.494063 1130717 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0311 12:54:55.494092 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0311 12:54:55.532338 1130717 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0311 12:54:55.532373 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0311 12:54:55.535518 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0311 12:54:55.543184 1130717 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0311 12:54:55.543248 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0311 12:54:55.563593 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 12:54:55.597144 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 12:54:55.598296 1130717 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0311 12:54:55.598318 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0311 12:54:55.609111 1130717 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 12:54:55.609150 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 12:54:55.628241 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 12:54:55.631326 1130717 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0311 12:54:55.631353 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0311 12:54:55.634135 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 12:54:55.640920 1130717 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0311 12:54:55.640945 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0311 12:54:55.685685 1130717 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0311 12:54:55.685711 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0311 12:54:55.690590 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 12:54:55.702677 1130717 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 12:54:55.702705 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 12:54:55.714627 1130717 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
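This is the effect of the long sed pipeline run at 12:54:54.762721: it splices a hosts{} stanza ahead of CoreDNS's forward plugin so pods can resolve host.minikube.internal to the gateway at 192.168.49.1. The same edit sketched in Go, shelling out to kubectl as the log does; the companion `log` directive that the sed also adds is omitted, and error handling is minimal:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostsBlock matches what the sed expression inserts into the Corefile.
    const hostsBlock = `        hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
    `

    func main() {
    	base := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "-n", "kube-system"}
    	out, err := exec.Command("kubectl", append(base, "get", "configmap", "coredns", "-o", "yaml")...).Output()
    	if err != nil {
    		panic(err)
    	}
    	// Insert the stanza just before the forward plugin line, as sed's
    	// insert-before address does; indentation mirrors the -o yaml output.
    	patched := strings.Replace(string(out),
    		"        forward . /etc/resolv.conf",
    		hostsBlock+"        forward . /etc/resolv.conf", 1)
    	replace := exec.Command("kubectl", append(base, "replace", "-f", "-")...)
    	replace.Stdin = strings.NewReader(patched)
    	if err := replace.Run(); err != nil {
    		panic(err)
    	}
    	fmt.Println("host record injected into CoreDNS's ConfigMap")
    }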
	I0311 12:54:55.721195 1130717 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0311 12:54:55.721228 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0311 12:54:55.737119 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 12:54:55.749494 1130717 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0311 12:54:55.749529 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0311 12:54:55.769317 1130717 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0311 12:54:55.769373 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0311 12:54:55.786686 1130717 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0311 12:54:55.786723 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0311 12:54:55.837091 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0311 12:54:55.854535 1130717 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0311 12:54:55.854563 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0311 12:54:55.866860 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 12:54:55.887256 1130717 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0311 12:54:55.887284 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0311 12:54:55.905378 1130717 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0311 12:54:55.905405 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0311 12:54:55.968578 1130717 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0311 12:54:55.968615 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0311 12:54:56.013352 1130717 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0311 12:54:56.013384 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0311 12:54:56.067086 1130717 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0311 12:54:56.067113 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0311 12:54:56.082314 1130717 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 12:54:56.082345 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0311 12:54:56.150001 1130717 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0311 12:54:56.150029 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0311 12:54:56.233184 1130717 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0311 12:54:56.233211 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0311 12:54:56.239632 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 12:54:56.281824 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0311 12:54:56.334573 1130717 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0311 12:54:56.334600 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0311 12:54:56.369806 1130717 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 12:54:56.369832 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0311 12:54:56.496197 1130717 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0311 12:54:56.496224 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0311 12:54:56.524119 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 12:54:56.623878 1130717 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0311 12:54:56.623906 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0311 12:54:56.645941 1130717 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0311 12:54:56.645973 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0311 12:54:56.679751 1130717 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0311 12:54:56.679776 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0311 12:54:56.724265 1130717 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-127043" context rescaled to 1 replica
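kapi.go:248 scales CoreDNS down because a single-node cluster gains nothing from the default two replicas. minikube performs the rescale through client-go, but one kubectl call is an equivalent sketch:

    package main

    import "os/exec"

    func main() {
    	// Scale the coredns deployment to a single replica.
    	cmd := exec.Command("kubectl", "--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }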
	I0311 12:54:56.731305 1130717 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 12:54:56.731333 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0311 12:54:56.779891 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 12:54:57.172720 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:54:59.706395 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:54:59.830266 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.29466073s)
	I0311 12:55:01.585438 1130717 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0311 12:55:01.585540 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:55:01.612903 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:55:01.758340 1130717 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0311 12:55:01.803232 1130717 addons.go:234] Setting addon gcp-auth=true in "addons-127043"
	I0311 12:55:01.803286 1130717 host.go:66] Checking if "addons-127043" exists ...
	I0311 12:55:01.803759 1130717 cli_runner.go:164] Run: docker container inspect addons-127043 --format={{.State.Status}}
	I0311 12:55:01.838647 1130717 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0311 12:55:01.838704 1130717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-127043
	I0311 12:55:01.869397 1130717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/addons-127043/id_rsa Username:docker}
	I0311 12:55:01.915690 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:01.944536 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.380887004s)
	I0311 12:55:01.944574 1130717 addons.go:470] Verifying addon ingress=true in "addons-127043"
	I0311 12:55:01.944780 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.347594981s)
	I0311 12:55:01.944936 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.316564163s)
	I0311 12:55:01.944972 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.310816836s)
	I0311 12:55:01.944992 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.254381972s)
	I0311 12:55:01.945052 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.207913268s)
	I0311 12:55:01.945078 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.107963875s)
	I0311 12:55:01.945150 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.078252819s)
	I0311 12:55:01.945255 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.705592757s)
	I0311 12:55:01.945294 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.663440312s)
	I0311 12:55:01.945435 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.4212868s)
	I0311 12:55:01.947221 1130717 out.go:177] * Verifying ingress addon...
	I0311 12:55:01.949988 1130717 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0311 12:55:01.950245 1130717 addons.go:470] Verifying addon metrics-server=true in "addons-127043"
	I0311 12:55:01.950312 1130717 addons.go:470] Verifying addon registry=true in "addons-127043"
	W0311 12:55:01.950493 1130717 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0311 12:55:01.952221 1130717 out.go:177] * Verifying registry addon...
	I0311 12:55:01.955847 1130717 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-127043 service yakd-dashboard -n yakd-dashboard
	
	I0311 12:55:01.954106 1130717 retry.go:31] will retry after 366.123773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
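Both failures above are the same ordering race: the VolumeSnapshotClass object sits in the same apply batch as the CRDs that define it, and the API server has not established the new types by the time the batch reaches it. minikube's answer is the timed retry shown here (366ms) followed by the `apply --force` re-run at 12:55:02, which succeeds. An alternative technique, blocking on each CRD's Established condition before applying dependents, sketched in Go; the CRD names are the three from the stdout above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	crds := []string{
    		"volumesnapshotclasses.snapshot.storage.k8s.io",
    		"volumesnapshotcontents.snapshot.storage.k8s.io",
    		"volumesnapshots.snapshot.storage.k8s.io",
    	}
    	for _, crd := range crds {
    		// Block until the API server reports the CRD as Established.
    		cmd := exec.Command("kubectl", "--kubeconfig=/var/lib/minikube/kubeconfig",
    			"wait", "--for=condition=Established", "--timeout=60s", "crd/"+crd)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("%s: %v\n%s", crd, err, out))
    		}
    	}
    	fmt.Println("snapshot CRDs established; VolumeSnapshotClass objects can now be applied")
    }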
	I0311 12:55:01.955051 1130717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0311 12:55:01.970706 1130717 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0311 12:55:01.970738 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:01.971255 1130717 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0311 12:55:01.971277 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
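The kapi.go:75/:86/:96 triple is a poll loop: list the pods matching a label selector, then re-check each one's phase until all reach Running. For the ingress controller the kubectl equivalent is a single wait on the Ready condition; this sketch deliberately targets the controller component only, because two of the three pods found above are typically the admission-webhook Jobs, which complete rather than become Ready:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Wait up to 6 minutes for the ingress-nginx controller pod.
    	cmd := exec.Command("kubectl", "--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-n", "ingress-nginx", "wait", "pod",
    		"--selector=app.kubernetes.io/component=controller",
    		"--for=condition=Ready", "--timeout=6m0s")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s(err: %v)\n", out, err)
    }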
	W0311 12:55:01.984669 1130717 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
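This warning is a plain optimistic-concurrency Conflict: another writer updated the local-path StorageClass between minikube's read and its update, so the stored resourceVersion no longer matched. Re-reading and retrying works; a strategic-merge patch of just the default-class annotation avoids the version check altogether, since a patch carries no resourceVersion. A sketch (minikube itself goes through client-go, not kubectl):

    package main

    import "os/exec"

    func main() {
    	// Mark local-path as non-default without a read-modify-write cycle.
    	patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}`
    	cmd := exec.Command("kubectl", "--kubeconfig=/var/lib/minikube/kubeconfig",
    		"patch", "storageclass", "local-path", "-p", patch)
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }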
	I0311 12:55:02.242354 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.462415003s)
	I0311 12:55:02.242401 1130717 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-127043"
	I0311 12:55:02.244366 1130717 out.go:177] * Verifying csi-hostpath-driver addon...
	I0311 12:55:02.246165 1130717 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 12:55:02.246960 1130717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0311 12:55:02.248191 1130717 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0311 12:55:02.250187 1130717 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0311 12:55:02.250224 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0311 12:55:02.274133 1130717 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0311 12:55:02.274166 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0311 12:55:02.283483 1130717 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0311 12:55:02.283509 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:02.298160 1130717 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 12:55:02.298184 1130717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0311 12:55:02.319137 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 12:55:02.324838 1130717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 12:55:02.489840 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:02.492967 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:02.753033 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:03.090770 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:03.092321 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:03.269262 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:03.469927 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:03.473966 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:03.752535 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:03.954530 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:03.964157 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:04.271141 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:04.386708 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:04.491249 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:04.575975 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:04.797736 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.478559856s)
	I0311 12:55:04.798044 1130717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.47312402s)
	I0311 12:55:04.799499 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:04.801987 1130717 addons.go:470] Verifying addon gcp-auth=true in "addons-127043"
	I0311 12:55:04.804270 1130717 out.go:177] * Verifying gcp-auth addon...
	I0311 12:55:04.807197 1130717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0311 12:55:04.831906 1130717 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0311 12:55:04.831979 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:04.954313 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:04.962163 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:05.252794 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:05.310957 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:05.455717 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:05.463307 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:05.753324 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:05.811773 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:05.955539 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:05.963927 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:06.252620 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:06.311422 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:06.454178 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:06.462209 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:06.753127 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:06.814403 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:06.878938 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:06.954750 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:06.965135 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:07.254575 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:07.311648 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:07.455137 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:07.463964 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:07.754479 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:07.812091 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:07.954726 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:07.963069 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:08.252966 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:08.311755 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:08.454952 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:08.462306 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:08.752842 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:08.811354 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:08.955161 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:08.962817 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:09.253071 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:09.311693 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:09.378731 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:09.455815 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:09.468251 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:09.753916 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:09.811629 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:09.954446 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:09.962359 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:10.252528 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:10.311518 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:10.454297 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:10.463524 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:10.752594 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:10.811086 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:10.954853 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:10.962889 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:11.253051 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:11.311847 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:11.455071 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:11.462216 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:11.752967 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:11.811444 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:11.878957 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:11.954766 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:11.962951 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:12.252613 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:12.311675 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:12.454084 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:12.462495 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:12.752637 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:12.811192 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:12.954392 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:12.962362 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:13.252526 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:13.311498 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:13.459867 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:13.462655 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:13.753217 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:13.811086 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:13.955132 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:13.962792 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:14.253269 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:14.311499 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:14.378881 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:14.454978 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:14.462209 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:14.753220 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:14.812306 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:14.955156 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:14.962349 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:15.253244 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:15.311517 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:15.454459 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:15.462564 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:15.752809 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:15.811880 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:15.954261 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:15.962642 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:16.252960 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:16.311023 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:16.379066 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:16.454780 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:16.463142 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:16.753561 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:16.811638 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:16.954682 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:16.962915 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:17.253539 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:17.311018 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:17.454785 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:17.462933 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:17.753358 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:17.810967 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:17.954040 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:17.962007 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:18.253389 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:18.311344 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:18.455125 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:18.462266 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:18.754518 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:18.810954 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:18.879091 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:18.955391 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:18.962755 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:19.252868 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:19.311322 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:19.454254 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:19.462470 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:19.752791 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:19.810804 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:19.954577 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:19.962872 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:20.253155 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:20.311265 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:20.455106 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:20.462311 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:20.752569 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:20.811614 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:20.954592 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:20.962749 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:21.253169 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:21.311624 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:21.378821 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:21.454375 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:21.462553 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:21.752742 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:21.811307 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:21.954967 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:21.962282 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:22.252453 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:22.311319 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:22.454720 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:22.462783 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:22.753021 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:22.811513 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:22.955223 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:22.962346 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:23.255056 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:23.311745 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:23.466941 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:23.469770 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:23.753044 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:23.811195 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:23.878458 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:23.954813 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:23.963396 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:24.252900 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:24.310767 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:24.454403 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:24.462853 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:24.753612 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:24.811312 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:24.954457 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:24.962286 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:25.253669 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:25.312826 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:25.454747 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:25.462128 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:25.753728 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:25.811270 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:25.879241 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:25.956153 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:25.966560 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:26.252443 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:26.311226 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:26.454986 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:26.461946 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:26.752573 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:26.811208 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:26.954497 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:26.962508 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:27.252490 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:27.311361 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:27.455387 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:27.463605 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:27.752971 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:27.811055 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:27.954377 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:27.962661 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:28.252889 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:28.311467 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:28.378957 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:28.455190 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:28.462473 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:28.752472 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:28.811150 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:28.954938 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:28.964784 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:29.253200 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:29.310907 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:29.454633 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:29.462822 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:29.753326 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:29.811408 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:29.974404 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:29.974654 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:30.253527 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:30.311781 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:30.454518 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:30.462649 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:30.752965 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:30.811845 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:30.878854 1130717 node_ready.go:53] node "addons-127043" has status "Ready":"False"
	I0311 12:55:30.969407 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:30.992578 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:31.287714 1130717 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0311 12:55:31.287785 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:31.364054 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:31.392934 1130717 node_ready.go:49] node "addons-127043" has status "Ready":"True"
	I0311 12:55:31.393005 1130717 node_ready.go:38] duration metric: took 36.518294467s for node "addons-127043" to be "Ready" ...
	I0311 12:55:31.393031 1130717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 12:55:31.420943 1130717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-db48g" in "kube-system" namespace to be "Ready" ...
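	[editor's note] Each "waiting for pod ... current state: Pending" line above is one iteration of a poll loop: list pods by label selector, then check the PodReady condition. The per-selector timestamps suggest a poll interval of roughly 500ms. Below is a minimal sketch of that pattern using client-go; it is an illustration only, not minikube's actual kapi.go/pod_ready.go code, and the function names waitForPodsReady/allReady are hypothetical.

	package readywait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsReady polls until every pod matching selector in ns reports
	// the PodReady condition, or the timeout elapses. Sketch of the loop
	// pattern visible in the log; details differ from minikube's helpers.
	func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // assumed interval, inferred from log timestamps
		}
		return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
	}

	// allReady reports whether every pod has condition PodReady=True.
	func allReady(pods []corev1.Pod) bool {
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}

	[end editor's note]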
	I0311 12:55:31.464546 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:31.475377 1130717 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0311 12:55:31.475491 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:31.775519 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:31.815164 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:32.005751 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:32.039998 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:32.255595 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:32.311095 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:32.455152 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:32.464129 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:32.760951 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:32.819401 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:33.006360 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:33.006789 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:33.257576 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:33.311244 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:33.429264 1130717 pod_ready.go:92] pod "coredns-5dd5756b68-db48g" in "kube-system" namespace has status "Ready":"True"
	I0311 12:55:33.429298 1130717 pod_ready.go:81] duration metric: took 2.008283839s for pod "coredns-5dd5756b68-db48g" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.429324 1130717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-127043" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.437434 1130717 pod_ready.go:92] pod "etcd-addons-127043" in "kube-system" namespace has status "Ready":"True"
	I0311 12:55:33.437472 1130717 pod_ready.go:81] duration metric: took 8.140162ms for pod "etcd-addons-127043" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.437494 1130717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-127043" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.447803 1130717 pod_ready.go:92] pod "kube-apiserver-addons-127043" in "kube-system" namespace has status "Ready":"True"
	I0311 12:55:33.447830 1130717 pod_ready.go:81] duration metric: took 10.327324ms for pod "kube-apiserver-addons-127043" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.447842 1130717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-127043" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.459507 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:33.466940 1130717 pod_ready.go:92] pod "kube-controller-manager-addons-127043" in "kube-system" namespace has status "Ready":"True"
	I0311 12:55:33.466970 1130717 pod_ready.go:81] duration metric: took 19.120967ms for pod "kube-controller-manager-addons-127043" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.466988 1130717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gzphw" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.471591 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:33.491855 1130717 pod_ready.go:92] pod "kube-proxy-gzphw" in "kube-system" namespace has status "Ready":"True"
	I0311 12:55:33.491901 1130717 pod_ready.go:81] duration metric: took 24.897135ms for pod "kube-proxy-gzphw" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.491913 1130717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-127043" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.754899 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:33.811455 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:33.825759 1130717 pod_ready.go:92] pod "kube-scheduler-addons-127043" in "kube-system" namespace has status "Ready":"True"
	I0311 12:55:33.825785 1130717 pod_ready.go:81] duration metric: took 333.862919ms for pod "kube-scheduler-addons-127043" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.825797 1130717 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:33.954715 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:33.965913 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:34.254983 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:34.311415 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:34.453983 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:34.463017 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:34.755547 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:34.811331 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:34.955134 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:34.963211 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:35.255537 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:35.311472 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:35.455956 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:35.464112 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:35.754595 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:35.811658 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:35.834056 1130717 pod_ready.go:102] pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:35.956400 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:35.968605 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:36.255408 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:36.315147 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:36.456099 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:36.465907 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:36.755762 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:36.812100 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:36.954884 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:36.963894 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:37.254239 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:37.312827 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:37.457272 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:37.463807 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:37.756504 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:37.811724 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:37.957424 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:37.964865 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:38.256198 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:38.312614 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:38.334741 1130717 pod_ready.go:102] pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:38.459120 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:38.465910 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:38.755000 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:38.816113 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:38.955308 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:38.966348 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:39.258696 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:39.310816 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:39.454683 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:39.463302 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:39.753961 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:39.810972 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:39.954622 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:39.963166 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:40.254524 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:40.329021 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:40.359879 1130717 pod_ready.go:102] pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:40.459782 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:40.470829 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:40.755793 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:40.811836 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:40.960714 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:40.974671 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:41.254549 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:41.311936 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:41.455439 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:41.464246 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:41.754953 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:41.811807 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:41.954626 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:41.969481 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:42.263231 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:42.318306 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:42.381507 1130717 pod_ready.go:102] pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:42.455353 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:42.470327 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:42.755605 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:42.816404 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:42.997975 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:43.000326 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:43.258207 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:43.312369 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:43.469926 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:43.478635 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:43.764991 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:43.826102 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:43.956267 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:43.968533 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:44.257334 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:44.311757 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:44.456464 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:44.464682 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:44.755841 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:44.813915 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:44.841620 1130717 pod_ready.go:102] pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:44.984399 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:44.986773 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:45.258521 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:45.312027 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:45.455475 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:45.464216 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:45.755604 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:45.811022 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:45.954369 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:45.963326 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:46.257114 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:46.311938 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:46.455375 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:46.463896 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:46.754849 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:46.811793 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:46.955090 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:46.968348 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:47.254352 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:47.311054 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:47.333626 1130717 pod_ready.go:102] pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:47.454650 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:47.463439 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:47.754779 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:47.811785 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:47.958516 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:47.966650 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:48.261781 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:48.312153 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:48.455984 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:48.463716 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:48.754446 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:48.811289 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:48.954903 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:48.963579 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:49.255733 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:49.312476 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:49.454769 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:49.465145 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:49.754257 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:49.810949 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:49.832484 1130717 pod_ready.go:102] pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:49.957395 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:49.964855 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:50.259893 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:50.312262 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:50.455720 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:50.465033 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:50.760370 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:50.818721 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:50.954837 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:50.967198 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:51.256830 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:51.342815 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:51.360930 1130717 pod_ready.go:92] pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace has status "Ready":"True"
	I0311 12:55:51.360957 1130717 pod_ready.go:81] duration metric: took 17.535151353s for pod "metrics-server-69cf46c98-hlhsg" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:51.360970 1130717 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kh579" in "kube-system" namespace to be "Ready" ...
	I0311 12:55:51.460694 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:51.467584 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:51.754280 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:51.811175 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:51.955216 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:51.963660 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:52.280334 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:52.311568 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:52.455777 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:52.467989 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:52.756521 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:52.812279 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:52.956784 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:52.965741 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:53.256928 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:53.312463 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:53.369184 1130717 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kh579" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:53.455556 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:53.463737 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:53.758251 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:53.810858 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:53.954538 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:53.963372 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:54.260704 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:54.311080 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:54.455380 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:54.466564 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:54.754774 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:54.812042 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:54.954372 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:54.962967 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:55.253566 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:55.311879 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:55.478280 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:55.482286 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:55.754399 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:55.811857 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:55.868290 1130717 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kh579" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:55.955370 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:55.963348 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:56.255210 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:56.312179 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:56.457169 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:56.463434 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:56.754209 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:56.812166 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:56.959575 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:56.966396 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:57.254162 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:57.311890 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:57.459880 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:57.464044 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:57.753493 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:57.811067 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:57.955025 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:57.962639 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:58.253330 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:58.312593 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:58.367926 1130717 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kh579" in "kube-system" namespace has status "Ready":"False"
	I0311 12:55:58.454791 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:58.463539 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:58.753867 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:58.811569 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:58.954619 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:58.963174 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:59.255560 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:59.313524 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:59.457275 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:59.462968 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:55:59.754484 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:55:59.814900 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:55:59.956085 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:55:59.968635 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:00.355321 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:00.356352 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:00.434434 1130717 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kh579" in "kube-system" namespace has status "Ready":"False"
	I0311 12:56:00.456406 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:00.505956 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:00.754563 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:00.818726 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:00.955936 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:00.964348 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:01.258019 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:01.312928 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:01.455125 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:01.468208 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:01.760710 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:01.812130 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:01.956022 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:01.963278 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:02.256456 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:02.312238 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:02.454484 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:02.465522 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:02.771999 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:02.810987 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:02.867734 1130717 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kh579" in "kube-system" namespace has status "Ready":"False"
	I0311 12:56:02.955231 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:02.963728 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:03.257096 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:03.313079 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:03.454734 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:03.463535 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:03.753931 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:03.811392 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:03.867310 1130717 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-kh579" in "kube-system" namespace has status "Ready":"True"
	I0311 12:56:03.867336 1130717 pod_ready.go:81] duration metric: took 12.506330188s for pod "nvidia-device-plugin-daemonset-kh579" in "kube-system" namespace to be "Ready" ...
	I0311 12:56:03.867363 1130717 pod_ready.go:38] duration metric: took 32.474266003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 12:56:03.867388 1130717 api_server.go:52] waiting for apiserver process to appear ...
	I0311 12:56:03.867437 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 12:56:03.867506 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 12:56:03.917087 1130717 cri.go:89] found id: "ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683"
	I0311 12:56:03.917109 1130717 cri.go:89] found id: ""
	I0311 12:56:03.917117 1130717 logs.go:276] 1 containers: [ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683]
	I0311 12:56:03.917172 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:03.921607 1130717 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 12:56:03.921706 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 12:56:03.957363 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:03.968074 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:03.980166 1130717 cri.go:89] found id: "e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd"
	I0311 12:56:03.980229 1130717 cri.go:89] found id: ""
	I0311 12:56:03.980254 1130717 logs.go:276] 1 containers: [e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd]
	I0311 12:56:03.980342 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:03.984003 1130717 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 12:56:03.984127 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 12:56:04.038178 1130717 cri.go:89] found id: "cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959"
	I0311 12:56:04.038202 1130717 cri.go:89] found id: ""
	I0311 12:56:04.038211 1130717 logs.go:276] 1 containers: [cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959]
	I0311 12:56:04.038269 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:04.041933 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 12:56:04.042014 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 12:56:04.083966 1130717 cri.go:89] found id: "7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77"
	I0311 12:56:04.083990 1130717 cri.go:89] found id: ""
	I0311 12:56:04.083998 1130717 logs.go:276] 1 containers: [7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77]
	I0311 12:56:04.084060 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:04.087715 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 12:56:04.087840 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 12:56:04.128765 1130717 cri.go:89] found id: "5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c"
	I0311 12:56:04.128795 1130717 cri.go:89] found id: ""
	I0311 12:56:04.128803 1130717 logs.go:276] 1 containers: [5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c]
	I0311 12:56:04.128860 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:04.133034 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 12:56:04.133107 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 12:56:04.172517 1130717 cri.go:89] found id: "ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387"
	I0311 12:56:04.172540 1130717 cri.go:89] found id: ""
	I0311 12:56:04.172550 1130717 logs.go:276] 1 containers: [ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387]
	I0311 12:56:04.172608 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:04.176166 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 12:56:04.176239 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 12:56:04.219026 1130717 cri.go:89] found id: "1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310"
	I0311 12:56:04.219058 1130717 cri.go:89] found id: ""
	I0311 12:56:04.219066 1130717 logs.go:276] 1 containers: [1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310]
	I0311 12:56:04.219144 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:04.222948 1130717 logs.go:123] Gathering logs for kindnet [1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310] ...
	I0311 12:56:04.222975 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310"
	I0311 12:56:04.254631 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:04.266556 1130717 logs.go:123] Gathering logs for CRI-O ...
	I0311 12:56:04.266581 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 12:56:04.317874 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:04.354049 1130717 logs.go:123] Gathering logs for describe nodes ...
	I0311 12:56:04.354085 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 12:56:04.457395 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:04.464059 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:04.534365 1130717 logs.go:123] Gathering logs for etcd [e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd] ...
	I0311 12:56:04.534444 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd"
	I0311 12:56:04.595053 1130717 logs.go:123] Gathering logs for kube-scheduler [7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77] ...
	I0311 12:56:04.595095 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77"
	I0311 12:56:04.658642 1130717 logs.go:123] Gathering logs for kube-proxy [5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c] ...
	I0311 12:56:04.658678 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c"
	I0311 12:56:04.752920 1130717 logs.go:123] Gathering logs for kube-controller-manager [ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387] ...
	I0311 12:56:04.752992 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387"
	I0311 12:56:04.755353 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:04.812998 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:04.866490 1130717 logs.go:123] Gathering logs for container status ...
	I0311 12:56:04.866532 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 12:56:04.954833 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:04.963842 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:05.090309 1130717 logs.go:123] Gathering logs for kubelet ...
	I0311 12:56:05.090341 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0311 12:56:05.177322 1130717 logs.go:138] Found kubelet problem: Mar 11 12:54:54 addons-127043 kubelet[1487]: W0311 12:54:54.202216    1487 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.177583 1130717 logs.go:138] Found kubelet problem: Mar 11 12:54:54 addons-127043 kubelet[1487]: E0311 12:54:54.202275    1487 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.203156 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001266    1487 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.203356 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001301    1487 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.203544 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001319    1487 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.203747 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001333    1487 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.203933 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001861    1487 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.204140 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001899    1487 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.204313 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001951    1487 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.204496 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001962    1487 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.204681 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.002237    1487 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.204885 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.002256    1487 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.206344 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.012200    1487 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.206551 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.012237    1487 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	I0311 12:56:05.239906 1130717 logs.go:123] Gathering logs for dmesg ...
	I0311 12:56:05.239949 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 12:56:05.255510 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:05.270908 1130717 logs.go:123] Gathering logs for kube-apiserver [ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683] ...
	I0311 12:56:05.270936 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683"
	I0311 12:56:05.311311 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:05.455219 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:05.466976 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:05.491851 1130717 logs.go:123] Gathering logs for coredns [cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959] ...
	I0311 12:56:05.491891 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959"
	I0311 12:56:05.619074 1130717 out.go:304] Setting ErrFile to fd 2...
	I0311 12:56:05.619098 1130717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0311 12:56:05.619144 1130717 out.go:239] X Problems detected in kubelet:
	W0311 12:56:05.619154 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001962    1487 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.619161 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.002237    1487 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.619169 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.002256    1487 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.619175 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.012200    1487 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	W0311 12:56:05.619180 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.012237    1487 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	I0311 12:56:05.619190 1130717 out.go:304] Setting ErrFile to fd 2...
	I0311 12:56:05.619196 1130717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:56:05.757136 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:05.813108 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:05.960633 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:05.964516 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:06.255499 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:06.311022 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:06.456333 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:06.474269 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:06.755162 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:06.811694 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:06.959105 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:06.964759 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:07.257523 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:07.311450 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:07.456203 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:07.464965 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:07.756484 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:07.813535 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:07.955935 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:07.969353 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:08.266901 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:08.311629 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:08.454768 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:08.463211 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 12:56:08.756004 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:08.812093 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:08.955493 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:08.962987 1130717 kapi.go:107] duration metric: took 1m7.007937849s to wait for kubernetes.io/minikube-addons=registry ...
	I0311 12:56:09.254651 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:09.315113 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:09.461012 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:09.770797 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:09.811672 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:09.955561 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:10.256569 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:10.311725 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:10.455162 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:10.754799 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:10.811800 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:10.954914 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:11.254154 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:11.312339 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:11.454740 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:11.756674 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:11.811642 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:11.960571 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:12.257162 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:12.312642 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:12.456830 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:12.756584 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:12.811312 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:12.955770 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:13.255534 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:13.311144 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:13.458615 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:13.754475 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:13.833490 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:13.955019 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:14.255614 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:14.311583 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:14.455157 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:14.754223 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:14.827788 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:14.954499 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:15.255910 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:15.312187 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:15.455219 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:15.620539 1130717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 12:56:15.642438 1130717 api_server.go:72] duration metric: took 1m21.187332987s to wait for apiserver process to appear ...
	I0311 12:56:15.642466 1130717 api_server.go:88] waiting for apiserver healthz status ...
	I0311 12:56:15.642510 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 12:56:15.642577 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 12:56:15.693359 1130717 cri.go:89] found id: "ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683"
	I0311 12:56:15.693380 1130717 cri.go:89] found id: ""
	I0311 12:56:15.693388 1130717 logs.go:276] 1 containers: [ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683]
	I0311 12:56:15.693485 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:15.697987 1130717 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 12:56:15.698056 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 12:56:15.738070 1130717 cri.go:89] found id: "e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd"
	I0311 12:56:15.738095 1130717 cri.go:89] found id: ""
	I0311 12:56:15.738103 1130717 logs.go:276] 1 containers: [e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd]
	I0311 12:56:15.738159 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:15.742040 1130717 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 12:56:15.742115 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 12:56:15.756405 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:15.811816 1130717 cri.go:89] found id: "cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959"
	I0311 12:56:15.811839 1130717 cri.go:89] found id: ""
	I0311 12:56:15.811847 1130717 logs.go:276] 1 containers: [cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959]
	I0311 12:56:15.811913 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:15.821712 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:15.828298 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 12:56:15.828383 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 12:56:15.903717 1130717 cri.go:89] found id: "7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77"
	I0311 12:56:15.903740 1130717 cri.go:89] found id: ""
	I0311 12:56:15.903748 1130717 logs.go:276] 1 containers: [7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77]
	I0311 12:56:15.903801 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:15.913168 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 12:56:15.913249 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 12:56:15.960302 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:15.978358 1130717 cri.go:89] found id: "5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c"
	I0311 12:56:15.978427 1130717 cri.go:89] found id: ""
	I0311 12:56:15.978450 1130717 logs.go:276] 1 containers: [5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c]
	I0311 12:56:15.978537 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:15.993962 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 12:56:15.994100 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 12:56:16.051903 1130717 cri.go:89] found id: "ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387"
	I0311 12:56:16.051975 1130717 cri.go:89] found id: ""
	I0311 12:56:16.052010 1130717 logs.go:276] 1 containers: [ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387]
	I0311 12:56:16.052107 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:16.058188 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 12:56:16.058320 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 12:56:16.109113 1130717 cri.go:89] found id: "1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310"
	I0311 12:56:16.109137 1130717 cri.go:89] found id: ""
	I0311 12:56:16.109145 1130717 logs.go:276] 1 containers: [1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310]
	I0311 12:56:16.109218 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:16.112813 1130717 logs.go:123] Gathering logs for coredns [cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959] ...
	I0311 12:56:16.112836 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959"
	I0311 12:56:16.163736 1130717 logs.go:123] Gathering logs for kube-scheduler [7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77] ...
	I0311 12:56:16.163766 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77"
	I0311 12:56:16.212888 1130717 logs.go:123] Gathering logs for kube-proxy [5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c] ...
	I0311 12:56:16.212921 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c"
	I0311 12:56:16.257305 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:16.275185 1130717 logs.go:123] Gathering logs for kube-controller-manager [ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387] ...
	I0311 12:56:16.275256 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387"
	I0311 12:56:16.313472 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:16.359565 1130717 logs.go:123] Gathering logs for CRI-O ...
	I0311 12:56:16.359638 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 12:56:16.449179 1130717 logs.go:123] Gathering logs for dmesg ...
	I0311 12:56:16.449217 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 12:56:16.456913 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:16.476601 1130717 logs.go:123] Gathering logs for describe nodes ...
	I0311 12:56:16.476633 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 12:56:16.665113 1130717 logs.go:123] Gathering logs for kube-apiserver [ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683] ...
	I0311 12:56:16.665145 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683"
	I0311 12:56:16.747897 1130717 logs.go:123] Gathering logs for container status ...
	I0311 12:56:16.747937 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 12:56:16.755179 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:16.817959 1130717 logs.go:123] Gathering logs for kubelet ...
	I0311 12:56:16.817989 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 12:56:16.851037 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0311 12:56:16.900112 1130717 logs.go:138] Found kubelet problem: Mar 11 12:54:54 addons-127043 kubelet[1487]: W0311 12:54:54.202216    1487 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.900336 1130717 logs.go:138] Found kubelet problem: Mar 11 12:54:54 addons-127043 kubelet[1487]: E0311 12:54:54.202275    1487 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.931093 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001266    1487 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.931297 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001301    1487 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.931490 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001319    1487 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.931699 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001333    1487 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.931886 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001861    1487 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.932097 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001899    1487 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.932262 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001951    1487 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.932447 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001962    1487 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.932636 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.002237    1487 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.932845 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.002256    1487 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.934295 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.012200    1487 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	W0311 12:56:16.934505 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.012237    1487 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	I0311 12:56:16.958623 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:16.977907 1130717 logs.go:123] Gathering logs for etcd [e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd] ...
	I0311 12:56:16.977944 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd"
	I0311 12:56:17.041418 1130717 logs.go:123] Gathering logs for kindnet [1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310] ...
	I0311 12:56:17.041451 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310"
	I0311 12:56:17.085434 1130717 out.go:304] Setting ErrFile to fd 2...
	I0311 12:56:17.085460 1130717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0311 12:56:17.085528 1130717 out.go:239] X Problems detected in kubelet:
	W0311 12:56:17.085538 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001962    1487 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:17.085674 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.002237    1487 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:17.085683 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.002256    1487 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:17.085690 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.012200    1487 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	W0311 12:56:17.085697 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.012237    1487 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	I0311 12:56:17.085708 1130717 out.go:304] Setting ErrFile to fd 2...
	I0311 12:56:17.085713 1130717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:56:17.254252 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:17.311507 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:17.455832 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:17.754109 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:17.812361 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:17.984496 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:18.260518 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:18.311821 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:18.456253 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:18.754172 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:18.811651 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:18.956988 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:19.254206 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:19.312542 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:19.455102 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:19.754145 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:19.812184 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 12:56:19.962022 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:20.254643 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:20.311719 1130717 kapi.go:107] duration metric: took 1m15.504520439s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0311 12:56:20.314791 1130717 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-127043 cluster.
	I0311 12:56:20.317467 1130717 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0311 12:56:20.320678 1130717 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0311 12:56:20.456869 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:20.755676 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:20.956487 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:21.255115 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:21.455201 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:21.754066 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:21.957705 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:22.255544 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:22.456361 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:22.754067 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:22.954735 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:23.253643 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:23.455933 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:23.756933 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:23.955519 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:24.259757 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:24.457566 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:24.753748 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:24.955724 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:25.254023 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:25.455073 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:25.755030 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:25.956770 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:26.256615 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:26.454938 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:26.757515 1130717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 12:56:26.955102 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:27.086355 1130717 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 12:56:27.104882 1130717 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0311 12:56:27.106509 1130717 api_server.go:141] control plane version: v1.28.4
	I0311 12:56:27.106574 1130717 api_server.go:131] duration metric: took 11.464090447s to wait for apiserver health ...
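	The probe above can be reproduced by hand; a sketch, assuming the host can reach the node IP directly (-k is needed because the apiserver certificate is minikube-signed):

	curl -sk https://192.168.49.2:8443/healthz   # the log above shows the body "ok" on success
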
	I0311 12:56:27.106592 1130717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 12:56:27.106620 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 12:56:27.106702 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 12:56:27.172734 1130717 cri.go:89] found id: "ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683"
	I0311 12:56:27.172758 1130717 cri.go:89] found id: ""
	I0311 12:56:27.172767 1130717 logs.go:276] 1 containers: [ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683]
	I0311 12:56:27.172824 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:27.176380 1130717 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 12:56:27.176498 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 12:56:27.225990 1130717 cri.go:89] found id: "e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd"
	I0311 12:56:27.226025 1130717 cri.go:89] found id: ""
	I0311 12:56:27.226034 1130717 logs.go:276] 1 containers: [e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd]
	I0311 12:56:27.226110 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:27.229833 1130717 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 12:56:27.229905 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 12:56:27.256529 1130717 kapi.go:107] duration metric: took 1m25.009569333s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0311 12:56:27.285549 1130717 cri.go:89] found id: "cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959"
	I0311 12:56:27.285612 1130717 cri.go:89] found id: ""
	I0311 12:56:27.285632 1130717 logs.go:276] 1 containers: [cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959]
	I0311 12:56:27.285734 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:27.289143 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 12:56:27.289267 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 12:56:27.332041 1130717 cri.go:89] found id: "7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77"
	I0311 12:56:27.332112 1130717 cri.go:89] found id: ""
	I0311 12:56:27.332134 1130717 logs.go:276] 1 containers: [7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77]
	I0311 12:56:27.332222 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:27.335750 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 12:56:27.335818 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 12:56:27.376406 1130717 cri.go:89] found id: "5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c"
	I0311 12:56:27.376470 1130717 cri.go:89] found id: ""
	I0311 12:56:27.376491 1130717 logs.go:276] 1 containers: [5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c]
	I0311 12:56:27.376596 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:27.380625 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 12:56:27.380724 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 12:56:27.431253 1130717 cri.go:89] found id: "ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387"
	I0311 12:56:27.431277 1130717 cri.go:89] found id: ""
	I0311 12:56:27.431286 1130717 logs.go:276] 1 containers: [ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387]
	I0311 12:56:27.431339 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:27.435322 1130717 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 12:56:27.435402 1130717 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 12:56:27.455582 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:27.474743 1130717 cri.go:89] found id: "1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310"
	I0311 12:56:27.474806 1130717 cri.go:89] found id: ""
	I0311 12:56:27.474822 1130717 logs.go:276] 1 containers: [1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310]
	I0311 12:56:27.474888 1130717 ssh_runner.go:195] Run: which crictl
	I0311 12:56:27.478688 1130717 logs.go:123] Gathering logs for etcd [e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd] ...
	I0311 12:56:27.478715 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd"
	I0311 12:56:27.528556 1130717 logs.go:123] Gathering logs for kube-scheduler [7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77] ...
	I0311 12:56:27.528597 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77"
	I0311 12:56:27.574944 1130717 logs.go:123] Gathering logs for container status ...
	I0311 12:56:27.574993 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 12:56:27.638934 1130717 logs.go:123] Gathering logs for kube-apiserver [ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683] ...
	I0311 12:56:27.638968 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683"
	I0311 12:56:27.704515 1130717 logs.go:123] Gathering logs for dmesg ...
	I0311 12:56:27.704557 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 12:56:27.725664 1130717 logs.go:123] Gathering logs for describe nodes ...
	I0311 12:56:27.725733 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 12:56:27.864641 1130717 logs.go:123] Gathering logs for coredns [cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959] ...
	I0311 12:56:27.864673 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959"
	I0311 12:56:27.917541 1130717 logs.go:123] Gathering logs for kube-proxy [5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c] ...
	I0311 12:56:27.917572 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c"
	I0311 12:56:27.955621 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:27.958128 1130717 logs.go:123] Gathering logs for kube-controller-manager [ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387] ...
	I0311 12:56:27.958153 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387"
	I0311 12:56:28.029844 1130717 logs.go:123] Gathering logs for kindnet [1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310] ...
	I0311 12:56:28.029922 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310"
	I0311 12:56:28.092227 1130717 logs.go:123] Gathering logs for CRI-O ...
	I0311 12:56:28.092253 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 12:56:28.186322 1130717 logs.go:123] Gathering logs for kubelet ...
	I0311 12:56:28.186358 1130717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0311 12:56:28.224567 1130717 logs.go:138] Found kubelet problem: Mar 11 12:54:54 addons-127043 kubelet[1487]: W0311 12:54:54.202216    1487 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.224818 1130717 logs.go:138] Found kubelet problem: Mar 11 12:54:54 addons-127043 kubelet[1487]: E0311 12:54:54.202275    1487 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.249394 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001266    1487 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.249586 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001301    1487 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.249774 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001319    1487 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.249974 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001333    1487 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.250161 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001861    1487 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.250367 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001899    1487 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.250535 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.001951    1487 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.250718 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001962    1487 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.250904 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.002237    1487 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.251109 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.002256    1487 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.252481 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.012200    1487 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.252682 1130717 logs.go:138] Found kubelet problem: Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.012237    1487 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	I0311 12:56:28.291778 1130717 out.go:304] Setting ErrFile to fd 2...
	I0311 12:56:28.291803 1130717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0311 12:56:28.291866 1130717 out.go:239] X Problems detected in kubelet:
	W0311 12:56:28.291880 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.001962    1487 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-127043" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.291889 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.002237    1487 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.291902 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.002256    1487 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.291909 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: W0311 12:55:31.012200    1487 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	W0311 12:56:28.291916 1130717 out.go:239]   Mar 11 12:55:31 addons-127043 kubelet[1487]: E0311 12:55:31.012237    1487 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-127043" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-127043' and this object
	I0311 12:56:28.291927 1130717 out.go:304] Setting ErrFile to fd 2...
	I0311 12:56:28.291932 1130717 out.go:338] TERM=,COLORTERM=, which probably does not support color
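	The gathering pass above is plain crictl/journalctl; a sketch reproducing it by hand on the node, using only commands that appear verbatim in this log:

	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)   # resolve the container ID, as in cri.go:54 above
	sudo /usr/bin/crictl logs --tail 400 "$ID"              # per-container logs, as in logs.go:123 above
	sudo journalctl -u kubelet -n 400                       # kubelet unit logs, same command as above
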
	I0311 12:56:28.454808 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:28.954279 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:29.455323 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:29.955224 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:30.455287 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:30.954897 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:31.454921 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:31.954683 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:32.454996 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:32.954724 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:33.454496 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:33.954371 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:34.455447 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:34.954584 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:35.459764 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:35.954746 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:36.455809 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:36.954874 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:37.455145 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:37.955278 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:38.307037 1130717 system_pods.go:59] 18 kube-system pods found
	I0311 12:56:38.307114 1130717 system_pods.go:61] "coredns-5dd5756b68-db48g" [dcf60810-a256-449d-bf0b-be44b352004e] Running
	I0311 12:56:38.307134 1130717 system_pods.go:61] "csi-hostpath-attacher-0" [0c879c86-9a15-4c7b-b59d-5a2a1d26fc50] Running
	I0311 12:56:38.307154 1130717 system_pods.go:61] "csi-hostpath-resizer-0" [ca878237-d62a-4f27-aee1-4992fe2a7c4b] Running
	I0311 12:56:38.307190 1130717 system_pods.go:61] "csi-hostpathplugin-582mm" [f0ee4869-31e3-4cd4-802e-a506a0e22519] Running
	I0311 12:56:38.307214 1130717 system_pods.go:61] "etcd-addons-127043" [965075f8-4ffa-4172-b42a-94451639ede5] Running
	I0311 12:56:38.307236 1130717 system_pods.go:61] "kindnet-sdf6k" [9fbdc5dd-7b92-436a-bd2e-4fd0de544ff1] Running
	I0311 12:56:38.307257 1130717 system_pods.go:61] "kube-apiserver-addons-127043" [45af7c7d-2e4f-492a-a992-78e47957658f] Running
	I0311 12:56:38.307277 1130717 system_pods.go:61] "kube-controller-manager-addons-127043" [482d2d4b-8187-4ff3-8668-17f302e1ddd7] Running
	I0311 12:56:38.307316 1130717 system_pods.go:61] "kube-ingress-dns-minikube" [4c63510e-0809-4de4-aba1-ba6ebb45a134] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 12:56:38.307338 1130717 system_pods.go:61] "kube-proxy-gzphw" [3b321be6-7e45-4613-a5b4-b49ec6de36e8] Running
	I0311 12:56:38.307361 1130717 system_pods.go:61] "kube-scheduler-addons-127043" [74832156-ace6-4324-97dd-14d2cd92d0a7] Running
	I0311 12:56:38.307392 1130717 system_pods.go:61] "metrics-server-69cf46c98-hlhsg" [6f000c7f-3777-4a03-a0f6-5728160d4000] Running
	I0311 12:56:38.307419 1130717 system_pods.go:61] "nvidia-device-plugin-daemonset-kh579" [946f3486-8140-43ba-8eb4-b1e76771c071] Running
	I0311 12:56:38.307441 1130717 system_pods.go:61] "registry-7wnxf" [08602d05-9b4b-4fab-ae06-5d18c7d6971b] Running
	I0311 12:56:38.307460 1130717 system_pods.go:61] "registry-proxy-ckqms" [fbeb4621-6db7-4a4a-b294-12b96612b41d] Running
	I0311 12:56:38.307480 1130717 system_pods.go:61] "snapshot-controller-58dbcc7b99-rmhv9" [227c655c-e169-4873-91e1-46bd44055ddf] Running
	I0311 12:56:38.307513 1130717 system_pods.go:61] "snapshot-controller-58dbcc7b99-xwz9j" [e7d97919-f278-4455-baa1-c7e5a1d85fad] Running
	I0311 12:56:38.307532 1130717 system_pods.go:61] "storage-provisioner" [4eab3d28-5333-4e60-b67e-f8644afd7acf] Running
	I0311 12:56:38.307551 1130717 system_pods.go:74] duration metric: took 11.200952329s to wait for pod list to return data ...
	I0311 12:56:38.307574 1130717 default_sa.go:34] waiting for default service account to be created ...
	I0311 12:56:38.313118 1130717 default_sa.go:45] found service account: "default"
	I0311 12:56:38.313140 1130717 default_sa.go:55] duration metric: took 5.533908ms for default service account to be created ...
	I0311 12:56:38.313151 1130717 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 12:56:38.331816 1130717 system_pods.go:86] 18 kube-system pods found
	I0311 12:56:38.331909 1130717 system_pods.go:89] "coredns-5dd5756b68-db48g" [dcf60810-a256-449d-bf0b-be44b352004e] Running
	I0311 12:56:38.331931 1130717 system_pods.go:89] "csi-hostpath-attacher-0" [0c879c86-9a15-4c7b-b59d-5a2a1d26fc50] Running
	I0311 12:56:38.331951 1130717 system_pods.go:89] "csi-hostpath-resizer-0" [ca878237-d62a-4f27-aee1-4992fe2a7c4b] Running
	I0311 12:56:38.331982 1130717 system_pods.go:89] "csi-hostpathplugin-582mm" [f0ee4869-31e3-4cd4-802e-a506a0e22519] Running
	I0311 12:56:38.332008 1130717 system_pods.go:89] "etcd-addons-127043" [965075f8-4ffa-4172-b42a-94451639ede5] Running
	I0311 12:56:38.332028 1130717 system_pods.go:89] "kindnet-sdf6k" [9fbdc5dd-7b92-436a-bd2e-4fd0de544ff1] Running
	I0311 12:56:38.332063 1130717 system_pods.go:89] "kube-apiserver-addons-127043" [45af7c7d-2e4f-492a-a992-78e47957658f] Running
	I0311 12:56:38.332089 1130717 system_pods.go:89] "kube-controller-manager-addons-127043" [482d2d4b-8187-4ff3-8668-17f302e1ddd7] Running
	I0311 12:56:38.332116 1130717 system_pods.go:89] "kube-ingress-dns-minikube" [4c63510e-0809-4de4-aba1-ba6ebb45a134] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 12:56:38.332149 1130717 system_pods.go:89] "kube-proxy-gzphw" [3b321be6-7e45-4613-a5b4-b49ec6de36e8] Running
	I0311 12:56:38.332174 1130717 system_pods.go:89] "kube-scheduler-addons-127043" [74832156-ace6-4324-97dd-14d2cd92d0a7] Running
	I0311 12:56:38.332194 1130717 system_pods.go:89] "metrics-server-69cf46c98-hlhsg" [6f000c7f-3777-4a03-a0f6-5728160d4000] Running
	I0311 12:56:38.332229 1130717 system_pods.go:89] "nvidia-device-plugin-daemonset-kh579" [946f3486-8140-43ba-8eb4-b1e76771c071] Running
	I0311 12:56:38.332252 1130717 system_pods.go:89] "registry-7wnxf" [08602d05-9b4b-4fab-ae06-5d18c7d6971b] Running
	I0311 12:56:38.332273 1130717 system_pods.go:89] "registry-proxy-ckqms" [fbeb4621-6db7-4a4a-b294-12b96612b41d] Running
	I0311 12:56:38.332307 1130717 system_pods.go:89] "snapshot-controller-58dbcc7b99-rmhv9" [227c655c-e169-4873-91e1-46bd44055ddf] Running
	I0311 12:56:38.332330 1130717 system_pods.go:89] "snapshot-controller-58dbcc7b99-xwz9j" [e7d97919-f278-4455-baa1-c7e5a1d85fad] Running
	I0311 12:56:38.332349 1130717 system_pods.go:89] "storage-provisioner" [4eab3d28-5333-4e60-b67e-f8644afd7acf] Running
	I0311 12:56:38.332386 1130717 system_pods.go:126] duration metric: took 19.228366ms to wait for k8s-apps to be running ...
	I0311 12:56:38.332411 1130717 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 12:56:38.332503 1130717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 12:56:38.353894 1130717 system_svc.go:56] duration metric: took 21.474422ms WaitForService to wait for kubelet
	I0311 12:56:38.353972 1130717 kubeadm.go:576] duration metric: took 1m43.898871054s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 12:56:38.354008 1130717 node_conditions.go:102] verifying NodePressure condition ...
	I0311 12:56:38.357250 1130717 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 12:56:38.357333 1130717 node_conditions.go:123] node cpu capacity is 2
	I0311 12:56:38.357418 1130717 node_conditions.go:105] duration metric: took 3.390561ms to run NodePressure ...
	I0311 12:56:38.357451 1130717 start.go:240] waiting for startup goroutines ...
	I0311 12:56:38.455348 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:38.955583 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:39.454479 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:39.954375 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:40.454796 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:40.955262 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:41.456618 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:41.956392 1130717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 12:56:42.455368 1130717 kapi.go:107] duration metric: took 1m40.505379293s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0311 12:56:42.458247 1130717 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0311 12:56:42.460419 1130717 addons.go:505] duration metric: took 1m48.005032966s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0311 12:56:42.460471 1130717 start.go:245] waiting for cluster config update ...
	I0311 12:56:42.460492 1130717 start.go:254] writing updated cluster config ...
	I0311 12:56:42.460802 1130717 ssh_runner.go:195] Run: rm -f paused
	I0311 12:56:42.967805 1130717 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 12:56:42.980705 1130717 out.go:177] * Done! kubectl is now configured to use "addons-127043" cluster and "default" namespace by default
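	Two optional follow-up checks after the "Done!" line; standard minikube/kubectl invocations, not part of this test run:

	kubectl config current-context            # should print addons-127043
	minikube -p addons-127043 addons list     # shows the enabled/disabled addon set
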
	
	
	==> CRI-O <==
	Mar 11 13:00:51 addons-127043 crio[912]: time="2024-03-11 13:00:51.736911021Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c1b87b3a-2a1d-446d-974b-283cd57fd044 name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:00:51 addons-127043 crio[912]: time="2024-03-11 13:00:51.737099201Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c1b87b3a-2a1d-446d-974b-283cd57fd044 name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:00:51 addons-127043 crio[912]: time="2024-03-11 13:00:51.738135982Z" level=info msg="Creating container: default/hello-world-app-5d77478584-6cj8v/hello-world-app" id=3cf84867-a84e-4b79-99f5-f2d809ac6744 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 11 13:00:51 addons-127043 crio[912]: time="2024-03-11 13:00:51.738223447Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 11 13:00:51 addons-127043 crio[912]: time="2024-03-11 13:00:51.838977832Z" level=info msg="Created container 66db594ad5f1f2130a1c50eb31fedcf9d83ea244286c14c9c5f7217fb20af55a: default/hello-world-app-5d77478584-6cj8v/hello-world-app" id=3cf84867-a84e-4b79-99f5-f2d809ac6744 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 11 13:00:51 addons-127043 crio[912]: time="2024-03-11 13:00:51.841038702Z" level=info msg="Starting container: 66db594ad5f1f2130a1c50eb31fedcf9d83ea244286c14c9c5f7217fb20af55a" id=2c7abdda-551e-4d8a-b176-33960fd59936 name=/runtime.v1.RuntimeService/StartContainer
	Mar 11 13:00:51 addons-127043 crio[912]: time="2024-03-11 13:00:51.852470056Z" level=info msg="Started container" PID=8535 containerID=66db594ad5f1f2130a1c50eb31fedcf9d83ea244286c14c9c5f7217fb20af55a description=default/hello-world-app-5d77478584-6cj8v/hello-world-app id=2c7abdda-551e-4d8a-b176-33960fd59936 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc2889fa7c17233b8f907f96be9fda691418804172ca647be8efd7b605c111e0
	Mar 11 13:00:51 addons-127043 conmon[8513]: conmon 66db594ad5f1f2130a1c <ninfo>: container 8535 exited with status 1
	Mar 11 13:00:52 addons-127043 crio[912]: time="2024-03-11 13:00:52.092026590Z" level=info msg="Removing container: 36a471bca4c153cf2386a2cb6736f1b9a224eace96c49bebde1f3224d437ffbd" id=5b9222a7-839c-4e11-9fe8-5fba3607cf06 name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 11 13:00:52 addons-127043 crio[912]: time="2024-03-11 13:00:52.117640400Z" level=info msg="Removed container 36a471bca4c153cf2386a2cb6736f1b9a224eace96c49bebde1f3224d437ffbd: default/hello-world-app-5d77478584-6cj8v/hello-world-app" id=5b9222a7-839c-4e11-9fe8-5fba3607cf06 name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 11 13:00:52 addons-127043 crio[912]: time="2024-03-11 13:00:52.773257657Z" level=info msg="Stopping container: 3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496 (timeout: 2s)" id=afffdfa8-166a-4c74-8a64-568a1e94edf4 name=/runtime.v1.RuntimeService/StopContainer
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.779471389Z" level=warning msg="Stopping container 3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=afffdfa8-166a-4c74-8a64-568a1e94edf4 name=/runtime.v1.RuntimeService/StopContainer
	Mar 11 13:00:54 addons-127043 conmon[5718]: conmon 3e91569b1aab393a39b7 <ninfo>: container 5729 exited with status 137
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.914562494Z" level=info msg="Stopped container 3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496: ingress-nginx/ingress-nginx-controller-76dc478dd8-qdd22/controller" id=afffdfa8-166a-4c74-8a64-568a1e94edf4 name=/runtime.v1.RuntimeService/StopContainer
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.915057152Z" level=info msg="Stopping pod sandbox: da46b7834bcb6c2e9f941efba8d6208af846a9e498319ee114205861c596a335" id=a647fe4e-81b3-41ac-8f66-7e024bdcd1e0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.918505302Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-TZAHAYGI2G4PM64O - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-INHXOM37XN5TWU5S - [0:0]\n-X KUBE-HP-INHXOM37XN5TWU5S\n-X KUBE-HP-TZAHAYGI2G4PM64O\nCOMMIT\n"
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.922138095Z" level=info msg="Closing host port tcp:80"
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.922191452Z" level=info msg="Closing host port tcp:443"
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.923544812Z" level=info msg="Host port tcp:80 does not have an open socket"
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.923589184Z" level=info msg="Host port tcp:443 does not have an open socket"
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.923785076Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-76dc478dd8-qdd22 Namespace:ingress-nginx ID:da46b7834bcb6c2e9f941efba8d6208af846a9e498319ee114205861c596a335 UID:5d3055dd-e93d-40a8-bee8-71c6e04b37f1 NetNS:/var/run/netns/6da758f0-42cf-430c-8e77-fdcc1595e979 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.923945811Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-76dc478dd8-qdd22 from CNI network \"kindnet\" (type=ptp)"
	Mar 11 13:00:54 addons-127043 crio[912]: time="2024-03-11 13:00:54.951140807Z" level=info msg="Stopped pod sandbox: da46b7834bcb6c2e9f941efba8d6208af846a9e498319ee114205861c596a335" id=a647fe4e-81b3-41ac-8f66-7e024bdcd1e0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 11 13:00:55 addons-127043 crio[912]: time="2024-03-11 13:00:55.102988824Z" level=info msg="Removing container: 3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496" id=566d3d68-59b7-463e-8d83-ddee0ef36011 name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 11 13:00:55 addons-127043 crio[912]: time="2024-03-11 13:00:55.120659149Z" level=info msg="Removed container 3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496: ingress-nginx/ingress-nginx-controller-76dc478dd8-qdd22/controller" id=566d3d68-59b7-463e-8d83-ddee0ef36011 name=/runtime.v1.RuntimeService/RemoveContainer
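	The stop sequence above is the normal SIGTERM-then-SIGKILL path: the 2s stop timeout expires (the "timeout reached" warning) and conmon reports exit status 137, i.e. 128 + 9 (SIGKILL). A sketch of inspecting such a container on the node before it is removed, using crictl as elsewhere in this log:

	sudo crictl inspect 3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496
	# once RemoveContainer has run (last line above), this returns a "no such container" error
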
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66db594ad5f1f       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             8 seconds ago       Exited              hello-world-app           2                   dc2889fa7c172       hello-world-app-5d77478584-6cj8v
	bac7e6ec1b40f       docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674                              2 minutes ago       Running             nginx                     0                   e63fea354cedd       nginx
	5553361f2e3d2       ghcr.io/headlamp-k8s/headlamp@sha256:4768f9247f9e418fc4aa4e617fa993ada21a9d5ca013aeb62a6b5f70d684a107                        2 minutes ago       Running             headlamp                  0                   a562cf55adaef       headlamp-5485c556b-hwkqk
	8b821aa40ea5b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 4 minutes ago       Running             gcp-auth                  0                   ac70e917db849       gcp-auth-5f6b4f85fd-9h7wv
	3d089c9be6a8e       1a024e390dd050d584b5c93bb30810e8be713157ab713b0d77a7af14dfe88c1e                                                             4 minutes ago       Exited              patch                     1                   1a22295239547       ingress-nginx-admission-patch-dxmjx
	2ffd0becb14e7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0b1098ef00acee905f9736f98dd151af0a38d0fef0ccf9fb5ad189b20933e5f8   4 minutes ago       Exited              create                    0                   2482cc2dfad1d       ingress-nginx-admission-create-f7shn
	d5a2aaf856fda       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   0163edad4a0c7       yakd-dashboard-9947fc6bf-fcvqn
	cddd22dfa41f6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   20d5f9995e860       storage-provisioner
	cffd732056d93       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             5 minutes ago       Running             coredns                   0                   15fc4999448db       coredns-5dd5756b68-db48g
	1dda8ce7ac24d       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988                           5 minutes ago       Running             kindnet-cni               0                   24fd64d4168d0       kindnet-sdf6k
	5135080bbf38c       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             6 minutes ago       Running             kube-proxy                0                   05462140297ec       kube-proxy-gzphw
	e01b8fd7f7b44       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             6 minutes ago       Running             etcd                      0                   46e27eefd0976       etcd-addons-127043
	ade1f425f1126       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             6 minutes ago       Running             kube-controller-manager   0                   cb55dbd4701c8       kube-controller-manager-addons-127043
	ff3f137f54ffc       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             6 minutes ago       Running             kube-apiserver            0                   5ea879a9ed02d       kube-apiserver-addons-127043
	7b43cf1b2e4e4       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             6 minutes ago       Running             kube-scheduler            0                   6a74108d46200       kube-scheduler-addons-127043
	
	
	==> coredns [cffd732056d93edde7c9216cb58e49cd860621dbe808e50305aab14597394959] <==
	[INFO] 10.244.0.20:55728 - 40226 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000095718s
	[INFO] 10.244.0.20:50461 - 63197 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003459908s
	[INFO] 10.244.0.20:55728 - 46961 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002520592s
	[INFO] 10.244.0.20:50461 - 43214 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002193027s
	[INFO] 10.244.0.20:50461 - 57263 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000117831s
	[INFO] 10.244.0.20:55728 - 21329 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.008473859s
	[INFO] 10.244.0.20:55728 - 29946 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065385s
	[INFO] 10.244.0.20:34036 - 34979 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101757s
	[INFO] 10.244.0.20:48484 - 42512 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000131024s
	[INFO] 10.244.0.20:48484 - 1815 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120407s
	[INFO] 10.244.0.20:34036 - 13070 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000052265s
	[INFO] 10.244.0.20:48484 - 62984 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065524s
	[INFO] 10.244.0.20:34036 - 34382 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037907s
	[INFO] 10.244.0.20:48484 - 31145 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057607s
	[INFO] 10.244.0.20:34036 - 25931 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038399s
	[INFO] 10.244.0.20:48484 - 55099 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057672s
	[INFO] 10.244.0.20:34036 - 7977 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038465s
	[INFO] 10.244.0.20:34036 - 62848 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000096572s
	[INFO] 10.244.0.20:48484 - 14607 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000110323s
	[INFO] 10.244.0.20:34036 - 54896 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001725781s
	[INFO] 10.244.0.20:48484 - 5243 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002251545s
	[INFO] 10.244.0.20:34036 - 25592 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001750043s
	[INFO] 10.244.0.20:48484 - 26679 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000852704s
	[INFO] 10.244.0.20:34036 - 51857 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060996s
	[INFO] 10.244.0.20:48484 - 5768 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039376s
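	The NXDOMAIN runs above are ordinary resolver search-path expansion, not failures: with ndots:5 the short name hello-world-app.default.svc.cluster.local is retried against each search domain before the final NOERROR answer. A sketch of the pod resolver config behind this behavior (the search domains are visible in the queries themselves; the nameserver IP and options line are assumed minikube defaults):

	kubectl exec -n default nginx -- cat /etc/resolv.conf
	# search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	# nameserver 10.96.0.10    <- kube-dns ClusterIP, assumed
	# options ndots:5
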
	
	
	==> describe nodes <==
	Name:               addons-127043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-127043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=addons-127043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T12_54_41_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-127043
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 12:54:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-127043
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 13:00:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 13:00:49 +0000   Mon, 11 Mar 2024 12:54:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 13:00:49 +0000   Mon, 11 Mar 2024 12:54:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 13:00:49 +0000   Mon, 11 Mar 2024 12:54:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 13:00:49 +0000   Mon, 11 Mar 2024 12:55:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-127043
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 021bfeff0aa845b5ac0a82818ffe160e
	  System UUID:                9a8d79a8-247e-4166-a3f9-38a66accdf84
	  Boot ID:                    ac1cf86e-1c30-4f1a-912c-77e6f73db4d1
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-6cj8v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  gcp-auth                    gcp-auth-5f6b4f85fd-9h7wv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  headlamp                    headlamp-5485c556b-hwkqk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-5dd5756b68-db48g                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m6s
	  kube-system                 etcd-addons-127043                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m19s
	  kube-system                 kindnet-sdf6k                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m6s
	  kube-system                 kube-apiserver-addons-127043             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-addons-127043    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-gzphw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-addons-127043             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-fcvqn           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m59s  kube-proxy       
	  Normal  Starting                 6m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m20s  kubelet          Node addons-127043 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s  kubelet          Node addons-127043 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s  kubelet          Node addons-127043 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m7s   node-controller  Node addons-127043 event: Registered Node addons-127043 in Controller
	  Normal  NodeReady                5m30s  kubelet          Node addons-127043 status is now: NodeReady
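	The percentages in the Allocated resources table above follow directly from the Capacity block: 850m of CPU requests against the node's 2-CPU (2000m) allocatable gives 850*100/2000 = 42 (integer truncation of 42.5%), and the 100m CPU limit is the 5% shown.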
	
	
	==> dmesg <==
	[  +0.001037] FS-Cache: O-key=[8] 'fc70ed0000000000'
	[  +0.000693] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=00000000b3298906{9p.inode} n=000000004721427e
	[  +0.001039] FS-Cache: N-key=[8] 'fc70ed0000000000'
	[  +0.002418] FS-Cache: Duplicate cookie detected
	[  +0.000675] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000948] FS-Cache: O-cookie d=00000000b3298906{9p.inode} n=00000000d2deb342
	[  +0.001140] FS-Cache: O-key=[8] 'fc70ed0000000000'
	[  +0.000699] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=00000000b3298906{9p.inode} n=00000000a873158c
	[  +0.001048] FS-Cache: N-key=[8] 'fc70ed0000000000'
	[  +2.435024] FS-Cache: Duplicate cookie detected
	[  +0.000788] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000989] FS-Cache: O-cookie d=00000000b3298906{9p.inode} n=0000000019e50f85
	[  +0.001062] FS-Cache: O-key=[8] 'fb70ed0000000000'
	[  +0.000807] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=00000000b3298906{9p.inode} n=00000000848959be
	[  +0.001117] FS-Cache: N-key=[8] 'fb70ed0000000000'
	[Mar11 11:47] FS-Cache: Duplicate cookie detected
	[  +0.000772] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001035] FS-Cache: O-cookie d=00000000b3298906{9p.inode} n=00000000b547a8fe
	[  +0.001077] FS-Cache: O-key=[8] '0171ed0000000000'
	[  +0.000756] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000b3298906{9p.inode} n=000000004721427e
	[  +0.001067] FS-Cache: N-key=[8] '0171ed0000000000'
	
	
	==> etcd [e01b8fd7f7b44381d8be2524ecaf16bb6c0c7acbc042413dd2b6cf0bd5d605dd] <==
	{"level":"info","ts":"2024-03-11T12:54:34.330592Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T12:54:34.330646Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T12:54:58.31859Z","caller":"traceutil/trace.go:171","msg":"trace[896118072] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"237.672916ms","start":"2024-03-11T12:54:58.00665Z","end":"2024-03-11T12:54:58.244323Z","steps":["trace[896118072] 'process raft request'  (duration: 214.722447ms)","trace[896118072] 'compare'  (duration: 22.433526ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T12:54:58.346235Z","caller":"traceutil/trace.go:171","msg":"trace[907507540] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"116.849583ms","start":"2024-03-11T12:54:58.221507Z","end":"2024-03-11T12:54:58.338357Z","steps":["trace[907507540] 'process raft request'  (duration: 77.814049ms)","trace[907507540] 'attach lease to kv pair' {req_type:put; key:/registry/storageclasses/standard; req_size:974; } (duration: 35.27877ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T12:54:58.335101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.634901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T12:54:58.354499Z","caller":"traceutil/trace.go:171","msg":"trace[1808965638] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:381; }","duration":"138.062413ms","start":"2024-03-11T12:54:58.216415Z","end":"2024-03-11T12:54:58.354478Z","steps":["trace[1808965638] 'agreement among raft nodes before linearized reading'  (duration: 28.048804ms)","trace[1808965638] 'range keys from in-memory index tree'  (duration: 90.573551ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T12:54:58.3937Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T12:54:58.006622Z","time spent":"328.51298ms","remote":"127.0.0.1:42838","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4396,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-gzphw\" mod_revision:331 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-gzphw\" value_size:4345 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-gzphw\" > >"}
	{"level":"warn","ts":"2024-03-11T12:54:58.441549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.234292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-127043\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2024-03-11T12:54:58.44161Z","caller":"traceutil/trace.go:171","msg":"trace[1896476663] range","detail":"{range_begin:/registry/minions/addons-127043; range_end:; response_count:1; response_revision:382; }","duration":"198.306601ms","start":"2024-03-11T12:54:58.24329Z","end":"2024-03-11T12:54:58.441597Z","steps":["trace[1896476663] 'agreement among raft nodes before linearized reading'  (duration: 112.665727ms)","trace[1896476663] 'range keys from in-memory index tree'  (duration: 85.544556ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T12:54:58.441785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.823176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T12:54:58.441813Z","caller":"traceutil/trace.go:171","msg":"trace[127892280] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:382; }","duration":"197.850785ms","start":"2024-03-11T12:54:58.243955Z","end":"2024-03-11T12:54:58.441806Z","steps":["trace[127892280] 'agreement among raft nodes before linearized reading'  (duration: 111.972043ms)","trace[127892280] 'range keys from in-memory index tree'  (duration: 85.82468ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T12:54:58.442036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.403956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T12:54:58.442065Z","caller":"traceutil/trace.go:171","msg":"trace[1371347812] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:382; }","duration":"220.431614ms","start":"2024-03-11T12:54:58.221625Z","end":"2024-03-11T12:54:58.442056Z","steps":["trace[1371347812] 'agreement among raft nodes before linearized reading'  (duration: 134.338075ms)","trace[1371347812] 'range keys from in-memory index tree'  (duration: 86.059694ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T12:54:58.480652Z","caller":"traceutil/trace.go:171","msg":"trace[573879124] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"142.32808ms","start":"2024-03-11T12:54:58.33831Z","end":"2024-03-11T12:54:58.480638Z","steps":["trace[573879124] 'process raft request'  (duration: 142.266813ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T12:54:58.482626Z","caller":"traceutil/trace.go:171","msg":"trace[1663767092] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"148.237611ms","start":"2024-03-11T12:54:58.334365Z","end":"2024-03-11T12:54:58.482603Z","steps":["trace[1663767092] 'process raft request'  (duration: 72.439479ms)","trace[1663767092] 'compare'  (duration: 34.630712ms)","trace[1663767092] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/leases/kube-system/apiserver-bbe23idoaj34fjyvwzo4g3tpgm; req_size:675; } (duration: 39.017229ms)"],"step_count":3}
	{"level":"info","ts":"2024-03-11T12:54:58.483465Z","caller":"traceutil/trace.go:171","msg":"trace[2091346481] linearizableReadLoop","detail":"{readStateIndex:394; appliedIndex:393; }","duration":"148.037689ms","start":"2024-03-11T12:54:58.335419Z","end":"2024-03-11T12:54:58.483457Z","steps":["trace[2091346481] 'read index received'  (duration: 57.99471ms)","trace[2091346481] 'applied index is now lower than readState.Index'  (duration: 90.042241ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T12:54:58.524109Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.410933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T12:54:58.527221Z","caller":"traceutil/trace.go:171","msg":"trace[1858977291] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:384; }","duration":"191.164428ms","start":"2024-03-11T12:54:58.334832Z","end":"2024-03-11T12:54:58.525997Z","steps":["trace[1858977291] 'agreement among raft nodes before linearized reading'  (duration: 171.831608ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T12:54:58.770328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.655155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-127043\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2024-03-11T12:54:58.771691Z","caller":"traceutil/trace.go:171","msg":"trace[1419378082] range","detail":"{range_begin:/registry/minions/addons-127043; range_end:; response_count:1; response_revision:387; }","duration":"170.015925ms","start":"2024-03-11T12:54:58.601644Z","end":"2024-03-11T12:54:58.77166Z","steps":["trace[1419378082] 'agreement among raft nodes before linearized reading'  (duration: 78.345933ms)","trace[1419378082] 'range keys from in-memory index tree'  (duration: 90.277124ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T12:54:58.914196Z","caller":"traceutil/trace.go:171","msg":"trace[1239791023] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:390; }","duration":"105.680219ms","start":"2024-03-11T12:54:58.808493Z","end":"2024-03-11T12:54:58.914173Z","steps":["trace[1239791023] 'agreement among raft nodes before linearized reading'  (duration: 98.640944ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T12:54:58.919746Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.610912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T12:54:58.96967Z","caller":"traceutil/trace.go:171","msg":"trace[1093784134] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:390; }","duration":"188.553646ms","start":"2024-03-11T12:54:58.781102Z","end":"2024-03-11T12:54:58.969655Z","steps":["trace[1093784134] 'agreement among raft nodes before linearized reading'  (duration: 138.57038ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T12:54:59.053323Z","caller":"traceutil/trace.go:171","msg":"trace[645895928] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"103.112152ms","start":"2024-03-11T12:54:58.950193Z","end":"2024-03-11T12:54:59.053305Z","steps":["trace[645895928] 'process raft request'  (duration: 67.908359ms)","trace[645895928] 'compare'  (duration: 34.808553ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T12:54:59.056279Z","caller":"traceutil/trace.go:171","msg":"trace[1246630939] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"106.005076ms","start":"2024-03-11T12:54:58.950259Z","end":"2024-03-11T12:54:59.056264Z","steps":["trace[1246630939] 'process raft request'  (duration: 102.936156ms)"],"step_count":1}
	
	
	==> gcp-auth [8b821aa40ea5b4dff9168840a922412f5d0ba30687dc11925f290da181286361] <==
	2024/03/11 12:56:19 GCP Auth Webhook started!
	2024/03/11 12:56:54 Ready to marshal response ...
	2024/03/11 12:56:54 Ready to write response ...
	2024/03/11 12:57:03 Ready to marshal response ...
	2024/03/11 12:57:03 Ready to write response ...
	2024/03/11 12:57:12 Ready to marshal response ...
	2024/03/11 12:57:12 Ready to write response ...
	2024/03/11 12:57:12 Ready to marshal response ...
	2024/03/11 12:57:12 Ready to write response ...
	2024/03/11 12:57:21 Ready to marshal response ...
	2024/03/11 12:57:21 Ready to write response ...
	2024/03/11 12:57:34 Ready to marshal response ...
	2024/03/11 12:57:34 Ready to write response ...
	2024/03/11 12:57:58 Ready to marshal response ...
	2024/03/11 12:57:58 Ready to write response ...
	2024/03/11 12:57:58 Ready to marshal response ...
	2024/03/11 12:57:58 Ready to write response ...
	2024/03/11 12:57:58 Ready to marshal response ...
	2024/03/11 12:57:58 Ready to write response ...
	2024/03/11 12:58:12 Ready to marshal response ...
	2024/03/11 12:58:12 Ready to write response ...
	2024/03/11 13:00:34 Ready to marshal response ...
	2024/03/11 13:00:34 Ready to write response ...
	
	
	==> kernel <==
	 13:01:00 up  4:43,  0 users,  load average: 1.08, 1.41, 2.45
	Linux addons-127043 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [1dda8ce7ac24dd29d25623e8e847e1f1c433ec95d8d2ce6a22b1e33456eed310] <==
	I0311 12:58:50.994454       1 main.go:227] handling current node
	I0311 12:59:01.002035       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:59:01.002072       1 main.go:227] handling current node
	I0311 12:59:11.017004       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:59:11.017039       1 main.go:227] handling current node
	I0311 12:59:21.124352       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:59:21.124396       1 main.go:227] handling current node
	I0311 12:59:31.137209       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:59:31.137238       1 main.go:227] handling current node
	I0311 12:59:41.142184       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:59:41.142212       1 main.go:227] handling current node
	I0311 12:59:51.152503       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 12:59:51.152531       1 main.go:227] handling current node
	I0311 13:00:01.192713       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:00:01.213532       1 main.go:227] handling current node
	I0311 13:00:11.225719       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:00:11.225749       1 main.go:227] handling current node
	I0311 13:00:21.231071       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:00:21.231095       1 main.go:227] handling current node
	I0311 13:00:31.244304       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:00:31.244333       1 main.go:227] handling current node
	I0311 13:00:41.258437       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:00:41.258468       1 main.go:227] handling current node
	I0311 13:00:51.270455       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:00:51.270483       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ff3f137f54ffc94f3823341d3b6688b13fb5ce25b286c6e67d66368e37f89683] <==
	I0311 12:57:50.270201       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 12:57:50.281424       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 12:57:50.281579       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 12:57:50.291992       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 12:57:50.292046       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 12:57:50.355117       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 12:57:50.355540       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 12:57:50.383834       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 12:57:50.383981       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 12:57:50.384047       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 12:57:50.410992       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 12:57:50.411047       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 12:57:50.423326       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 12:57:50.423367       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0311 12:57:51.384154       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0311 12:57:51.423405       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0311 12:57:51.441544       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0311 12:57:58.474558       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.208.164"}
	I0311 12:58:12.444316       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0311 12:58:12.728273       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.128.122"}
	I0311 12:58:15.219927       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0311 12:58:15.247857       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0311 12:58:16.278628       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0311 12:58:52.404020       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0311 13:00:34.502993       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.234.174"}
	
	
	==> kube-controller-manager [ade1f425f1126c92469833140472a84a44830c44c241e54bb235e10ed027c387] <==
	W0311 13:00:10.911840       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 13:00:10.911875       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 13:00:12.281552       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 13:00:12.281586       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 13:00:32.154144       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 13:00:32.154180       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 13:00:34.242340       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0311 13:00:34.281644       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-6cj8v"
	I0311 13:00:34.296944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.67486ms"
	I0311 13:00:34.308322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.256065ms"
	I0311 13:00:34.308583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.397µs"
	I0311 13:00:34.314295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="82.714µs"
	I0311 13:00:39.078010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.606µs"
	W0311 13:00:39.952035       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 13:00:39.952068       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 13:00:40.074731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="221.451µs"
	I0311 13:00:41.070111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="79.538µs"
	I0311 13:00:51.725906       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0311 13:00:51.735841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="5.899µs"
	I0311 13:00:51.751092       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0311 13:00:52.109691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.063µs"
	W0311 13:00:54.576801       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 13:00:54.576836       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 13:00:54.803160       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 13:00:54.803193       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [5135080bbf38c9801a52764e164d73158f65a9e454b529bd13d35fc51bf3156c] <==
	I0311 12:55:00.832465       1 server_others.go:69] "Using iptables proxy"
	I0311 12:55:01.070290       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0311 12:55:01.280843       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0311 12:55:01.293621       1 server_others.go:152] "Using iptables Proxier"
	I0311 12:55:01.294124       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0311 12:55:01.294208       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0311 12:55:01.294284       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 12:55:01.294991       1 server.go:846] "Version info" version="v1.28.4"
	I0311 12:55:01.295082       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 12:55:01.296265       1 config.go:188] "Starting service config controller"
	I0311 12:55:01.296368       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 12:55:01.296415       1 config.go:97] "Starting endpoint slice config controller"
	I0311 12:55:01.296460       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 12:55:01.297133       1 config.go:315] "Starting node config controller"
	I0311 12:55:01.297650       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 12:55:01.401951       1 shared_informer.go:318] Caches are synced for service config
	I0311 12:55:01.409494       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 12:55:01.409813       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7b43cf1b2e4e4cd8f4ce8b703754b5049b7213d716489e32fc74f7290f8fcf77] <==
	W0311 12:54:37.801382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 12:54:37.801406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 12:54:37.801464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 12:54:37.801479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 12:54:37.802624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 12:54:37.802708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 12:54:37.809502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 12:54:37.809667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 12:54:37.809621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 12:54:37.809768       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 12:54:38.618365       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 12:54:38.618400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 12:54:38.637134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 12:54:38.637247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 12:54:38.657635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 12:54:38.657744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 12:54:38.688613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 12:54:38.688746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 12:54:38.840832       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 12:54:38.840868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 12:54:38.881197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 12:54:38.881306       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 12:54:38.961728       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 12:54:38.961854       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0311 12:54:41.582900       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 13:00:41 addons-127043 kubelet[1487]: E0311 13:00:41.057399    1487 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-6cj8v_default(eb90e594-65fe-4385-a21e-ba46ce3adac0)\"" pod="default/hello-world-app-5d77478584-6cj8v" podUID="eb90e594-65fe-4385-a21e-ba46ce3adac0"
	Mar 11 13:00:45 addons-127043 kubelet[1487]: I0311 13:00:45.734787    1487 scope.go:117] "RemoveContainer" containerID="bfc41f2c8f482ff5d6d95e9af4bcc9208d5f55da05c08e078956779bbd9ef7ae"
	Mar 11 13:00:45 addons-127043 kubelet[1487]: E0311 13:00:45.735092    1487 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(4c63510e-0809-4de4-aba1-ba6ebb45a134)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="4c63510e-0809-4de4-aba1-ba6ebb45a134"
	Mar 11 13:00:50 addons-127043 kubelet[1487]: I0311 13:00:50.425751    1487 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzvkg\" (UniqueName: \"kubernetes.io/projected/4c63510e-0809-4de4-aba1-ba6ebb45a134-kube-api-access-kzvkg\") pod \"4c63510e-0809-4de4-aba1-ba6ebb45a134\" (UID: \"4c63510e-0809-4de4-aba1-ba6ebb45a134\") "
	Mar 11 13:00:50 addons-127043 kubelet[1487]: I0311 13:00:50.430228    1487 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c63510e-0809-4de4-aba1-ba6ebb45a134-kube-api-access-kzvkg" (OuterVolumeSpecName: "kube-api-access-kzvkg") pod "4c63510e-0809-4de4-aba1-ba6ebb45a134" (UID: "4c63510e-0809-4de4-aba1-ba6ebb45a134"). InnerVolumeSpecName "kube-api-access-kzvkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 13:00:50 addons-127043 kubelet[1487]: I0311 13:00:50.526962    1487 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kzvkg\" (UniqueName: \"kubernetes.io/projected/4c63510e-0809-4de4-aba1-ba6ebb45a134-kube-api-access-kzvkg\") on node \"addons-127043\" DevicePath \"\""
	Mar 11 13:00:51 addons-127043 kubelet[1487]: I0311 13:00:51.077071    1487 scope.go:117] "RemoveContainer" containerID="bfc41f2c8f482ff5d6d95e9af4bcc9208d5f55da05c08e078956779bbd9ef7ae"
	Mar 11 13:00:51 addons-127043 kubelet[1487]: I0311 13:00:51.734746    1487 scope.go:117] "RemoveContainer" containerID="36a471bca4c153cf2386a2cb6736f1b9a224eace96c49bebde1f3224d437ffbd"
	Mar 11 13:00:52 addons-127043 kubelet[1487]: I0311 13:00:52.090425    1487 scope.go:117] "RemoveContainer" containerID="36a471bca4c153cf2386a2cb6736f1b9a224eace96c49bebde1f3224d437ffbd"
	Mar 11 13:00:52 addons-127043 kubelet[1487]: I0311 13:00:52.091224    1487 scope.go:117] "RemoveContainer" containerID="66db594ad5f1f2130a1c50eb31fedcf9d83ea244286c14c9c5f7217fb20af55a"
	Mar 11 13:00:52 addons-127043 kubelet[1487]: E0311 13:00:52.093481    1487 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-6cj8v_default(eb90e594-65fe-4385-a21e-ba46ce3adac0)\"" pod="default/hello-world-app-5d77478584-6cj8v" podUID="eb90e594-65fe-4385-a21e-ba46ce3adac0"
	Mar 11 13:00:52 addons-127043 kubelet[1487]: I0311 13:00:52.736116    1487 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4c63510e-0809-4de4-aba1-ba6ebb45a134" path="/var/lib/kubelet/pods/4c63510e-0809-4de4-aba1-ba6ebb45a134/volumes"
	Mar 11 13:00:52 addons-127043 kubelet[1487]: I0311 13:00:52.736617    1487 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="702d0fb1-e489-4a80-a573-150487925595" path="/var/lib/kubelet/pods/702d0fb1-e489-4a80-a573-150487925595/volumes"
	Mar 11 13:00:52 addons-127043 kubelet[1487]: I0311 13:00:52.736994    1487 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c974c659-e985-4826-9561-638c2c808158" path="/var/lib/kubelet/pods/c974c659-e985-4826-9561-638c2c808158/volumes"
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.065783    1487 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwzhk\" (UniqueName: \"kubernetes.io/projected/5d3055dd-e93d-40a8-bee8-71c6e04b37f1-kube-api-access-lwzhk\") pod \"5d3055dd-e93d-40a8-bee8-71c6e04b37f1\" (UID: \"5d3055dd-e93d-40a8-bee8-71c6e04b37f1\") "
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.065870    1487 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d3055dd-e93d-40a8-bee8-71c6e04b37f1-webhook-cert\") pod \"5d3055dd-e93d-40a8-bee8-71c6e04b37f1\" (UID: \"5d3055dd-e93d-40a8-bee8-71c6e04b37f1\") "
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.068535    1487 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d3055dd-e93d-40a8-bee8-71c6e04b37f1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5d3055dd-e93d-40a8-bee8-71c6e04b37f1" (UID: "5d3055dd-e93d-40a8-bee8-71c6e04b37f1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.069562    1487 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3055dd-e93d-40a8-bee8-71c6e04b37f1-kube-api-access-lwzhk" (OuterVolumeSpecName: "kube-api-access-lwzhk") pod "5d3055dd-e93d-40a8-bee8-71c6e04b37f1" (UID: "5d3055dd-e93d-40a8-bee8-71c6e04b37f1"). InnerVolumeSpecName "kube-api-access-lwzhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.100947    1487 scope.go:117] "RemoveContainer" containerID="3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496"
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.121035    1487 scope.go:117] "RemoveContainer" containerID="3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496"
	Mar 11 13:00:55 addons-127043 kubelet[1487]: E0311 13:00:55.122417    1487 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496\": container with ID starting with 3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496 not found: ID does not exist" containerID="3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496"
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.122475    1487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496"} err="failed to get container status \"3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496\": rpc error: code = NotFound desc = could not find container \"3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496\": container with ID starting with 3e91569b1aab393a39b72f7273fcbcf364bdc6cb4cdb987e346a5f7622a5d496 not found: ID does not exist"
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.166864    1487 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lwzhk\" (UniqueName: \"kubernetes.io/projected/5d3055dd-e93d-40a8-bee8-71c6e04b37f1-kube-api-access-lwzhk\") on node \"addons-127043\" DevicePath \"\""
	Mar 11 13:00:55 addons-127043 kubelet[1487]: I0311 13:00:55.166914    1487 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d3055dd-e93d-40a8-bee8-71c6e04b37f1-webhook-cert\") on node \"addons-127043\" DevicePath \"\""
	Mar 11 13:00:56 addons-127043 kubelet[1487]: I0311 13:00:56.736063    1487 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5d3055dd-e93d-40a8-bee8-71c6e04b37f1" path="/var/lib/kubelet/pods/5d3055dd-e93d-40a8-bee8-71c6e04b37f1/volumes"
	
	
	==> storage-provisioner [cddd22dfa41f6b8e96d54e00d6b3202e4ba0749db7f1c52e637a95758b2f8ef4] <==
	I0311 12:55:31.814831       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 12:55:31.852144       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 12:55:31.852288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 12:55:31.887499       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 12:55:31.917642       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-127043_a65e1ffd-6f0e-421f-8314-941959c350d1!
	I0311 12:55:31.917722       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b07493f9-a74e-4d84-a8f6-6f660d3cabf8", APIVersion:"v1", ResourceVersion:"883", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-127043_a65e1ffd-6f0e-421f-8314-941959c350d1 became leader
	I0311 12:55:32.074622       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-127043_a65e1ffd-6f0e-421f-8314-941959c350d1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-127043 -n addons-127043
helpers_test.go:261: (dbg) Run:  kubectl --context addons-127043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (169.67s)
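
The failing step is the in-VM curl probe at addons_test.go:262; an ssh/curl exit status of 28 matches curl's timeout exit code, so the request never got an answer from the ingress. A minimal manual reproduction, reusing only commands the test itself runs (the -m 10 curl timeout is an added convenience, not part of the test):

	# Repeat the probe; -m 10 makes curl give up after 10s instead of hanging for minutes.
	out/minikube-linux-arm64 -p addons-127043 ssh "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Confirm the controller pod is healthy and the Ingress was admitted before suspecting the nginx backend.
	kubectl --context addons-127043 -n ingress-nginx get pods -o wide
	kubectl --context addons-127043 describe ingress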

TestMutliControlPlane/serial/RestartCluster (127.32s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-992796 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0311 13:14:36.486556 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:15:04.166511 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-992796 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.059512235s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-992796       NotReady   control-plane   10m     v1.28.4
	ha-992796-m02   Ready      control-plane   9m50s   v1.28.4
	ha-992796-m04   Ready      <none>          7m51s   v1.28.4

-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

-- /stdout --
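
The go-template above prints one bare Ready status per node; an equivalent jsonpath query (a suggested alternative, not something the test runs) pairs each status with its node name, which makes the Unknown entry easier to attribute:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

A Ready condition of Unknown generally means the node's kubelet has stopped posting status updates, which is consistent with ha-992796 being reported NotReady above.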
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMutliControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-992796
helpers_test.go:235: (dbg) docker inspect ha-992796:

-- stdout --
	[
	    {
	        "Id": "824aa912f5877e99c31410983233e425659e44ded7a4f077c57a2e3d284b2b29",
	        "Created": "2024-03-11T13:05:50.154756171Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1186889,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-11T13:14:33.905300945Z",
	            "FinishedAt": "2024-03-11T13:14:32.932246473Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/824aa912f5877e99c31410983233e425659e44ded7a4f077c57a2e3d284b2b29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/824aa912f5877e99c31410983233e425659e44ded7a4f077c57a2e3d284b2b29/hostname",
	        "HostsPath": "/var/lib/docker/containers/824aa912f5877e99c31410983233e425659e44ded7a4f077c57a2e3d284b2b29/hosts",
	        "LogPath": "/var/lib/docker/containers/824aa912f5877e99c31410983233e425659e44ded7a4f077c57a2e3d284b2b29/824aa912f5877e99c31410983233e425659e44ded7a4f077c57a2e3d284b2b29-json.log",
	        "Name": "/ha-992796",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-992796:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-992796",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48ef4241c9ba3c147dd9a7bf25cbd74bccc5fc796a0146297d84c00d9ddd51c2-init/diff:/var/lib/docker/overlay2/4693be53430773dee06d553d71389b6111264113687a037a5053dad5bf06b450/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48ef4241c9ba3c147dd9a7bf25cbd74bccc5fc796a0146297d84c00d9ddd51c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48ef4241c9ba3c147dd9a7bf25cbd74bccc5fc796a0146297d84c00d9ddd51c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48ef4241c9ba3c147dd9a7bf25cbd74bccc5fc796a0146297d84c00d9ddd51c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-992796",
	                "Source": "/var/lib/docker/volumes/ha-992796/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-992796",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-992796",
	                "name.minikube.sigs.k8s.io": "ha-992796",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1110b76d1a4d527c1e30d6c2bec9801b4a4f35ee22ef8a02d1fa52ca56dab25",
	            "SandboxKey": "/var/run/docker/netns/b1110b76d1a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33992"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33988"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33989"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-992796": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "824aa912f587",
	                        "ha-992796"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "9806807e27d79da464f2b0d0d47479bb59f29569b77ed12f0447c709f96fe8ea",
	                    "EndpointID": "3e41115287f99fc6d6c04ed3ccdb7f36fe0876a9ce8190285c489b58133c918a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-992796",
	                        "824aa912f587"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
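
When only a few of the fields above matter, docker inspect --format can pull them out directly; these are illustrative invocations against the same container, not commands the harness runs:

	# Runtime state and restart count of the control-plane container
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' ha-992796
	# Host-side port mappings (e.g. 8443 -> 33989 for the apiserver)
	docker inspect -f '{{json .NetworkSettings.Ports}}' ha-992796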
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-992796 -n ha-992796
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-992796 logs -n 25: (1.837098883s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-992796 cp ha-992796-m03:/home/docker/cp-test.txt                              | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m04:/home/docker/cp-test_ha-992796-m03_ha-992796-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n                                                                 | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n ha-992796-m04 sudo cat                                          | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | /home/docker/cp-test_ha-992796-m03_ha-992796-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-992796 cp testdata/cp-test.txt                                                | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n                                                                 | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-992796 cp ha-992796-m04:/home/docker/cp-test.txt                              | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile3414573276/001/cp-test_ha-992796-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n                                                                 | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-992796 cp ha-992796-m04:/home/docker/cp-test.txt                              | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796:/home/docker/cp-test_ha-992796-m04_ha-992796.txt                       |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n                                                                 | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n ha-992796 sudo cat                                              | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | /home/docker/cp-test_ha-992796-m04_ha-992796.txt                                 |           |         |         |                     |                     |
	| cp      | ha-992796 cp ha-992796-m04:/home/docker/cp-test.txt                              | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m02:/home/docker/cp-test_ha-992796-m04_ha-992796-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n                                                                 | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n ha-992796-m02 sudo cat                                          | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | /home/docker/cp-test_ha-992796-m04_ha-992796-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-992796 cp ha-992796-m04:/home/docker/cp-test.txt                              | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m03:/home/docker/cp-test_ha-992796-m04_ha-992796-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n                                                                 | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | ha-992796-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-992796 ssh -n ha-992796-m03 sudo cat                                          | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | /home/docker/cp-test_ha-992796-m04_ha-992796-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-992796 node stop m02 -v=7                                                     | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-992796 node start m02 -v=7                                                    | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:09 UTC | 11 Mar 24 13:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-992796 -v=7                                                           | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-992796 -v=7                                                                | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:10 UTC | 11 Mar 24 13:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-992796 --wait=true -v=7                                                    | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:10 UTC | 11 Mar 24 13:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-992796                                                                | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:13 UTC |                     |
	| node    | ha-992796 node delete m03 -v=7                                                   | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:13 UTC | 11 Mar 24 13:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-992796 stop -v=7                                                              | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:13 UTC | 11 Mar 24 13:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-992796 --wait=true                                                         | ha-992796 | jenkins | v1.32.0 | 11 Mar 24 13:14 UTC | 11 Mar 24 13:16 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 13:14:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 13:14:33.364480 1186705 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:14:33.364705 1186705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:14:33.364734 1186705 out.go:304] Setting ErrFile to fd 2...
	I0311 13:14:33.364755 1186705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:14:33.365002 1186705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 13:14:33.365401 1186705 out.go:298] Setting JSON to false
	I0311 13:14:33.366321 1186705 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17821,"bootTime":1710145053,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0311 13:14:33.366447 1186705 start.go:139] virtualization:  
	I0311 13:14:33.369702 1186705 out.go:177] * [ha-992796] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 13:14:33.371967 1186705 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 13:14:33.374033 1186705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:14:33.372040 1186705 notify.go:220] Checking for updates...
	I0311 13:14:33.376085 1186705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:14:33.378322 1186705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	I0311 13:14:33.380445 1186705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 13:14:33.382501 1186705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:14:33.385389 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:14:33.385934 1186705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:14:33.407186 1186705 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 13:14:33.407311 1186705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:14:33.477465 1186705 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:46 SystemTime:2024-03-11 13:14:33.467359629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:14:33.477583 1186705 docker.go:295] overlay module found
	I0311 13:14:33.480126 1186705 out.go:177] * Using the docker driver based on existing profile
	I0311 13:14:33.482154 1186705 start.go:297] selected driver: docker
	I0311 13:14:33.482170 1186705 start.go:901] validating driver "docker" against &{Name:ha-992796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-992796 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kub
evirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:14:33.482430 1186705 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:14:33.482530 1186705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:14:33.542798 1186705 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:46 SystemTime:2024-03-11 13:14:33.533866014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:14:33.543249 1186705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:14:33.543318 1186705 cni.go:84] Creating CNI manager for ""
	I0311 13:14:33.543334 1186705 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0311 13:14:33.543385 1186705 start.go:340] cluster config:
	{Name:ha-992796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-992796 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device
-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:14:33.547245 1186705 out.go:177] * Starting "ha-992796" primary control-plane node in "ha-992796" cluster
	I0311 13:14:33.549429 1186705 cache.go:121] Beginning downloading kic base image for docker with crio
	I0311 13:14:33.551252 1186705 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 13:14:33.553273 1186705 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 13:14:33.553328 1186705 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0311 13:14:33.553356 1186705 cache.go:56] Caching tarball of preloaded images
	I0311 13:14:33.553399 1186705 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 13:14:33.553443 1186705 preload.go:173] Found /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0311 13:14:33.553461 1186705 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 13:14:33.553611 1186705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/config.json ...
	I0311 13:14:33.569088 1186705 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0311 13:14:33.569113 1186705 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0311 13:14:33.569137 1186705 cache.go:194] Successfully downloaded all kic artifacts
	I0311 13:14:33.569166 1186705 start.go:360] acquireMachinesLock for ha-992796: {Name:mk940027cdb28a61d6dd69f93d66138f20c9a797 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:14:33.569250 1186705 start.go:364] duration metric: took 58.534µs to acquireMachinesLock for "ha-992796"
	I0311 13:14:33.569274 1186705 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:14:33.569279 1186705 fix.go:54] fixHost starting: 
	I0311 13:14:33.569625 1186705 cli_runner.go:164] Run: docker container inspect ha-992796 --format={{.State.Status}}
	I0311 13:14:33.585077 1186705 fix.go:112] recreateIfNeeded on ha-992796: state=Stopped err=<nil>
	W0311 13:14:33.585117 1186705 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:14:33.587650 1186705 out.go:177] * Restarting existing docker container for "ha-992796" ...
	I0311 13:14:33.589312 1186705 cli_runner.go:164] Run: docker start ha-992796
	I0311 13:14:33.913068 1186705 cli_runner.go:164] Run: docker container inspect ha-992796 --format={{.State.Status}}
	I0311 13:14:33.935989 1186705 kic.go:430] container "ha-992796" state is running.
	I0311 13:14:33.936386 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796
	I0311 13:14:33.962268 1186705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/config.json ...
	I0311 13:14:33.962518 1186705 machine.go:94] provisionDockerMachine start ...
	I0311 13:14:33.962593 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:33.982038 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:14:33.982349 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33992 <nil> <nil>}
	I0311 13:14:33.982365 1186705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 13:14:33.982984 1186705 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49852->127.0.0.1:33992: read: connection reset by peer
	I0311 13:14:37.133394 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-992796
	
	I0311 13:14:37.133473 1186705 ubuntu.go:169] provisioning hostname "ha-992796"
	I0311 13:14:37.133581 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:37.150578 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:14:37.150834 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33992 <nil> <nil>}
	I0311 13:14:37.150850 1186705 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-992796 && echo "ha-992796" | sudo tee /etc/hostname
	I0311 13:14:37.293151 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-992796
	
	I0311 13:14:37.293238 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:37.315218 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:14:37.315479 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33992 <nil> <nil>}
	I0311 13:14:37.315501 1186705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-992796' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-992796/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-992796' | sudo tee -a /etc/hosts; 
				fi
			fi
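
The guarded edit above keeps /etc/hosts idempotent across restarts: 127.0.1.1 is the Debian/Ubuntu convention for mapping a machine's own hostname locally, so tools such as sudo can resolve it without DNS. A quick check that the mapping took effect (a minimal sketch, assuming a shell on the node, e.g. via minikube ssh):

	# Expect exactly one 127.0.1.1 line, mapping the node name.
	grep '^127\.0\.1\.1' /etc/hosts    # -> 127.0.1.1 ha-992796
	hostname                           # -> ha-992796
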
	I0311 13:14:37.445258 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 13:14:37.445282 1186705 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18350-1124504/.minikube CaCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18350-1124504/.minikube}
	I0311 13:14:37.445308 1186705 ubuntu.go:177] setting up certificates
	I0311 13:14:37.445318 1186705 provision.go:84] configureAuth start
	I0311 13:14:37.445404 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796
	I0311 13:14:37.461314 1186705 provision.go:143] copyHostCerts
	I0311 13:14:37.461407 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem
	I0311 13:14:37.461458 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem, removing ...
	I0311 13:14:37.461476 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem
	I0311 13:14:37.461549 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem (1078 bytes)
	I0311 13:14:37.461645 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem
	I0311 13:14:37.461667 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem, removing ...
	I0311 13:14:37.461676 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem
	I0311 13:14:37.461705 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem (1123 bytes)
	I0311 13:14:37.461758 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem
	I0311 13:14:37.461777 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem, removing ...
	I0311 13:14:37.461782 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem
	I0311 13:14:37.461807 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem (1675 bytes)
	I0311 13:14:37.461877 1186705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem org=jenkins.ha-992796 san=[127.0.0.1 192.168.49.2 ha-992796 localhost minikube]
	I0311 13:14:37.983641 1186705 provision.go:177] copyRemoteCerts
	I0311 13:14:37.983714 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 13:14:37.983756 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:38.003258 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33992 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796/id_rsa Username:docker}
	I0311 13:14:38.099351 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 13:14:38.099409 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0311 13:14:38.123534 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 13:14:38.123599 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0311 13:14:38.147539 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 13:14:38.147605 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 13:14:38.171027 1186705 provision.go:87] duration metric: took 725.679543ms to configureAuth
	I0311 13:14:38.171057 1186705 ubuntu.go:193] setting minikube options for container-runtime
	I0311 13:14:38.171332 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:14:38.171439 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:38.187329 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:14:38.187600 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33992 <nil> <nil>}
	I0311 13:14:38.187625 1186705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 13:14:38.569012 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 13:14:38.569036 1186705 machine.go:97] duration metric: took 4.60650114s to provisionDockerMachine
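
The sysconfig drop-in written just above hands CRI-O an extra --insecure-registry flag covering 10.96.0.0/12, the cluster's service CIDR, so images can be pulled over plain HTTP from registries exposed on in-cluster service IPs. A sketch of the same step done by hand (assuming a root shell on the node):

	# Write the drop-in CRI-O's unit sources, then restart to pick it up.
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" |
	  sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
	cat /etc/sysconfig/crio.minikube   # verify the contents
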
	I0311 13:14:38.569049 1186705 start.go:293] postStartSetup for "ha-992796" (driver="docker")
	I0311 13:14:38.569069 1186705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 13:14:38.569150 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 13:14:38.569222 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:38.586047 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33992 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796/id_rsa Username:docker}
	I0311 13:14:38.683252 1186705 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 13:14:38.686371 1186705 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0311 13:14:38.686406 1186705 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0311 13:14:38.686420 1186705 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0311 13:14:38.686430 1186705 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0311 13:14:38.686441 1186705 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-1124504/.minikube/addons for local assets ...
	I0311 13:14:38.686508 1186705 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-1124504/.minikube/files for local assets ...
	I0311 13:14:38.686588 1186705 filesync.go:149] local asset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> 11299062.pem in /etc/ssl/certs
	I0311 13:14:38.686600 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> /etc/ssl/certs/11299062.pem
	I0311 13:14:38.686705 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 13:14:38.694881 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem --> /etc/ssl/certs/11299062.pem (1708 bytes)
	I0311 13:14:38.717819 1186705 start.go:296] duration metric: took 148.754627ms for postStartSetup
	I0311 13:14:38.717913 1186705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:14:38.717958 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:38.733536 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33992 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796/id_rsa Username:docker}
	I0311 13:14:38.822129 1186705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0311 13:14:38.826651 1186705 fix.go:56] duration metric: took 5.257364944s for fixHost
	I0311 13:14:38.826678 1186705 start.go:83] releasing machines lock for "ha-992796", held for 5.257418161s
	I0311 13:14:38.826757 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796
	I0311 13:14:38.842266 1186705 ssh_runner.go:195] Run: cat /version.json
	I0311 13:14:38.842327 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:38.842393 1186705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 13:14:38.842465 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:38.860819 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33992 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796/id_rsa Username:docker}
	I0311 13:14:38.863058 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33992 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796/id_rsa Username:docker}
	I0311 13:14:39.065333 1186705 ssh_runner.go:195] Run: systemctl --version
	I0311 13:14:39.070167 1186705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 13:14:39.214735 1186705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 13:14:39.219620 1186705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 13:14:39.230258 1186705 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0311 13:14:39.230336 1186705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 13:14:39.240050 1186705 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0311 13:14:39.240073 1186705 start.go:494] detecting cgroup driver to use...
	I0311 13:14:39.240105 1186705 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0311 13:14:39.240158 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 13:14:39.254608 1186705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:14:39.266718 1186705 docker.go:217] disabling cri-docker service (if available) ...
	I0311 13:14:39.266803 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 13:14:39.281974 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 13:14:39.293676 1186705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 13:14:39.375502 1186705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 13:14:39.467227 1186705 docker.go:233] disabling docker service ...
	I0311 13:14:39.467341 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 13:14:39.479588 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 13:14:39.491513 1186705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 13:14:39.590304 1186705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 13:14:39.684107 1186705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 13:14:39.695897 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:14:39.712786 1186705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 13:14:39.712872 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:14:39.722946 1186705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 13:14:39.723076 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:14:39.734106 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:14:39.744983 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
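
The three sed edits above pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs" (matching the cgroupfs driver detected on the host a moment earlier), and re-add conmon_cgroup = "pod" directly under it. A sketch for confirming the result (key names as used in CRI-O's drop-in config):

	# The sed expressions leave these keys flush-left in the drop-in.
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# Expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
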
	I0311 13:14:39.755119 1186705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 13:14:39.764433 1186705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 13:14:39.772936 1186705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 13:14:39.781365 1186705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:14:39.875909 1186705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 13:14:39.996676 1186705 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 13:14:39.996749 1186705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 13:14:40.001526 1186705 start.go:562] Will wait 60s for crictl version
	I0311 13:14:40.001806 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:14:40.024381 1186705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 13:14:40.073607 1186705 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0311 13:14:40.073788 1186705 ssh_runner.go:195] Run: crio --version
	I0311 13:14:40.115647 1186705 ssh_runner.go:195] Run: crio --version
	I0311 13:14:40.163483 1186705 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0311 13:14:40.165554 1186705 cli_runner.go:164] Run: docker network inspect ha-992796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 13:14:40.181716 1186705 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0311 13:14:40.185974 1186705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
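
The bash one-liner above is the same idempotent hosts-update idiom used during provisioning: filter out any stale host.minikube.internal line with grep -v, append the fresh mapping, and install the result with a single sudo cp so /etc/hosts is never left half-written. The pattern, parameterized (HOST and IP here are illustrative placeholders):

	# HOST/IP are hypothetical placeholders; substitute real values.
	HOST=host.minikube.internal
	IP=192.168.49.1
	{ grep -v $'\t'"${HOST}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
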
	I0311 13:14:40.198622 1186705 kubeadm.go:877] updating cluster {Name:ha-992796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-992796 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false l
ogviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 13:14:40.198791 1186705 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 13:14:40.198861 1186705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 13:14:40.246723 1186705 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 13:14:40.246747 1186705 crio.go:415] Images already preloaded, skipping extraction
	I0311 13:14:40.246805 1186705 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 13:14:40.284449 1186705 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 13:14:40.284474 1186705 cache_images.go:84] Images are preloaded, skipping loading
	I0311 13:14:40.284483 1186705 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 crio true true} ...
	I0311 13:14:40.284597 1186705 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-992796 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-992796 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
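
The empty ExecStart= followed by a full ExecStart= is the standard systemd idiom for replacing, rather than appending to, a unit's start command from a drop-in; the rendered unit is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To see how systemd merges it (a sketch, run on the node):

	# Print the base unit plus every drop-in, in application order.
	systemctl cat kubelet
	# Confirm the flags the running kubelet actually received:
	pgrep -a kubelet
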
	I0311 13:14:40.284685 1186705 ssh_runner.go:195] Run: crio config
	I0311 13:14:40.342475 1186705 cni.go:84] Creating CNI manager for ""
	I0311 13:14:40.342498 1186705 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0311 13:14:40.342509 1186705 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 13:14:40.342537 1186705 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-992796 NodeName:ha-992796 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 13:14:40.342682 1186705 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-992796"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
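
This is the complete kubeadm configuration minikube renders for the primary control plane: an InitConfiguration with the node-local socket, name, and taints; a ClusterConfiguration whose controlPlaneEndpoint (control-plane.minikube.internal:8443) resolves to the kube-vip VIP 192.168.49.254; and KubeletConfiguration/KubeProxyConfiguration blocks that relax disk eviction and conntrack tuning for CI. The file lands at /var/tmp/minikube/kubeadm.yaml.new a few lines below; a sketch for validating such a file offline (assuming the staged v1.28 kubeadm binary supports the config validate subcommand):

	# Validate the rendered config without touching the running cluster.
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
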
	
	I0311 13:14:40.342701 1186705 kube-vip.go:101] generating kube-vip config ...
	I0311 13:14:40.342777 1186705 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
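
This static-pod manifest is what gives the cluster its HA endpoint: kube-vip runs on each control-plane node with NET_ADMIN/NET_RAW, elects a leader through the plndr-cp-lock lease, and the leader answers ARP for 192.168.49.254 on eth0 (per the env values above). Two quick checks once kubelet has started the pod (a sketch, from a control-plane node):

	# The elected leader should hold the VIP on eth0.
	ip addr show dev eth0 | grep 192.168.49.254
	# Leader election state is visible as a coordination.k8s.io Lease:
	kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get lease plndr-cp-lock
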
	I0311 13:14:40.342841 1186705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 13:14:40.351683 1186705 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 13:14:40.351754 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0311 13:14:40.360297 1186705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0311 13:14:40.378351 1186705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 13:14:40.396539 1186705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0311 13:14:40.414495 1186705 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 13:14:40.432872 1186705 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0311 13:14:40.436227 1186705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:14:40.447103 1186705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:14:40.540328 1186705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:14:40.554389 1186705 certs.go:68] Setting up /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796 for IP: 192.168.49.2
	I0311 13:14:40.554459 1186705 certs.go:194] generating shared ca certs ...
	I0311 13:14:40.554490 1186705 certs.go:226] acquiring lock for ca certs: {Name:mk30659f158a045ae3a6809b62fbd61891660c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:14:40.554666 1186705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key
	I0311 13:14:40.554770 1186705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key
	I0311 13:14:40.554818 1186705 certs.go:256] generating profile certs ...
	I0311 13:14:40.554939 1186705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/client.key
	I0311 13:14:40.554994 1186705 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key.6ebad525
	I0311 13:14:40.555033 1186705 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt.6ebad525 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0311 13:14:40.780611 1186705 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt.6ebad525 ...
	I0311 13:14:40.780645 1186705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt.6ebad525: {Name:mk896345c9a85dfe77a72d0dc3632d2362ee0146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:14:40.780849 1186705 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key.6ebad525 ...
	I0311 13:14:40.780872 1186705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key.6ebad525: {Name:mk9effdad27a3b11786d00356d90fbd062505fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:14:40.780965 1186705 certs.go:381] copying /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt.6ebad525 -> /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt
	I0311 13:14:40.781105 1186705 certs.go:385] copying /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key.6ebad525 -> /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key
	I0311 13:14:40.781234 1186705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.key
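
The apiserver certificate generated above carries IP SANs for every address a client may dial: the kubernetes Service ClusterIP (10.96.0.1), loopback, both control-plane node IPs, and the kube-vip VIP (192.168.49.254); the .6ebad525 suffix keys the file to that SAN set, so a changed IP yields a new suffix and forces regeneration. A minimal sketch of issuing a CA-signed cert with IP SANs using Go's standard library (newServingCert and its shape are illustrative, not minikube's crypto.go API):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServingCert issues a CA-signed serving certificate whose IP SANs cover
// every address a client may use to reach the apiserver.
func newServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 192.168.49.2, .3, .254
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}
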
	I0311 13:14:40.781252 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 13:14:40.781268 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 13:14:40.781284 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 13:14:40.781302 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 13:14:40.781316 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 13:14:40.781327 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 13:14:40.781363 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 13:14:40.781379 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 13:14:40.781436 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem (1338 bytes)
	W0311 13:14:40.781475 1186705 certs.go:480] ignoring /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906_empty.pem, impossibly tiny 0 bytes
	I0311 13:14:40.781488 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 13:14:40.781514 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem (1078 bytes)
	I0311 13:14:40.781541 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem (1123 bytes)
	I0311 13:14:40.781565 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem (1675 bytes)
	I0311 13:14:40.781610 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem (1708 bytes)
	I0311 13:14:40.781644 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:14:40.781666 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem -> /usr/share/ca-certificates/1129906.pem
	I0311 13:14:40.781681 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> /usr/share/ca-certificates/11299062.pem
	I0311 13:14:40.782341 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 13:14:40.809313 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 13:14:40.838862 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 13:14:40.865016 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 13:14:40.889489 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 13:14:40.914021 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 13:14:40.936968 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 13:14:40.961262 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 13:14:40.985540 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 13:14:41.011653 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem --> /usr/share/ca-certificates/1129906.pem (1338 bytes)
	I0311 13:14:41.036080 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem --> /usr/share/ca-certificates/11299062.pem (1708 bytes)
	I0311 13:14:41.060785 1186705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 13:14:41.079366 1186705 ssh_runner.go:195] Run: openssl version
	I0311 13:14:41.084855 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 13:14:41.094660 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:14:41.098175 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:14:41.098277 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:14:41.104955 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 13:14:41.114082 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1129906.pem && ln -fs /usr/share/ca-certificates/1129906.pem /etc/ssl/certs/1129906.pem"
	I0311 13:14:41.123723 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1129906.pem
	I0311 13:14:41.127379 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 13:02 /usr/share/ca-certificates/1129906.pem
	I0311 13:14:41.127451 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1129906.pem
	I0311 13:14:41.134581 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1129906.pem /etc/ssl/certs/51391683.0"
	I0311 13:14:41.143571 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11299062.pem && ln -fs /usr/share/ca-certificates/11299062.pem /etc/ssl/certs/11299062.pem"
	I0311 13:14:41.153352 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11299062.pem
	I0311 13:14:41.157009 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 13:02 /usr/share/ca-certificates/11299062.pem
	I0311 13:14:41.157088 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11299062.pem
	I0311 13:14:41.164102 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11299062.pem /etc/ssl/certs/3ec20f2e.0"
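
Each openssl x509 -hash plus ln -fs pair above follows OpenSSL's c_rehash convention: a CA file in /etc/ssl/certs is looked up by <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem). A sketch of automating that, shelling out to openssl for the hash since the subject-hash algorithm is OpenSSL-specific (rehashCert is a hypothetical helper):

package rehash

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// rehashCert links certPath to /etc/ssl/certs/<subject-hash>.0, the name
// OpenSSL resolves when verifying a chain against the system trust store.
func rehashCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replicate ln -fs: drop any stale link first
	return os.Symlink(certPath, link)
}
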
	I0311 13:14:41.173531 1186705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 13:14:41.177193 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 13:14:41.184142 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 13:14:41.191276 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 13:14:41.198229 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 13:14:41.205272 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 13:14:41.212271 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
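
The block above is a validity sweep: openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the kubeadm-managed etcd, kubelet-client, and front-proxy certs are judged reusable before a restart. The same check in Go, as a sketch (expiresWithin is an illustrative name):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go analogue of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}
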
	I0311 13:14:41.219278 1186705 kubeadm.go:391] StartCluster: {Name:ha-992796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-992796 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:14:41.219404 1186705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 13:14:41.219490 1186705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 13:14:41.256403 1186705 cri.go:89] found id: ""
	I0311 13:14:41.256513 1186705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 13:14:41.265788 1186705 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 13:14:41.265811 1186705 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 13:14:41.265817 1186705 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 13:14:41.265869 1186705 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 13:14:41.274571 1186705 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 13:14:41.275005 1186705 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-992796" does not appear in /home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:14:41.275125 1186705 kubeconfig.go:62] /home/jenkins/minikube-integration/18350-1124504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-992796" cluster setting kubeconfig missing "ha-992796" context setting]
	I0311 13:14:41.275517 1186705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/kubeconfig: {Name:mk1044b4a136be32fc018b928173d9e5fa18a2ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:14:41.275932 1186705 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:14:41.276171 1186705 kapi.go:59] client config for ha-992796: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/client.crt", KeyFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/client.key", CAFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16fd7a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
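
The rest.Config dump above shows a client built straight from the just-repaired kubeconfig: the profile's TLS client cert/key plus the cluster CA. Building an equivalent client with client-go, as a minimal sketch using the paths from this run:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube just wrote and derive a rest.Config that
	// carries the profile's client cert/key and the cluster CA.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18350-1124504/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil, "host:", cfg.Host)
}
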
	I0311 13:14:41.276833 1186705 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 13:14:41.276920 1186705 cert_rotation.go:137] Starting client certificate rotation controller
	I0311 13:14:41.285956 1186705 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0311 13:14:41.285977 1186705 kubeadm.go:591] duration metric: took 20.155172ms to restartPrimaryControlPlane
	I0311 13:14:41.285986 1186705 kubeadm.go:393] duration metric: took 66.726942ms to StartCluster
	I0311 13:14:41.286005 1186705 settings.go:142] acquiring lock: {Name:mk0a76f674884ed0c489dd40a16d57ce9e1cba50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:14:41.286063 1186705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:14:41.286693 1186705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18350-1124504/kubeconfig: {Name:mk1044b4a136be32fc018b928173d9e5fa18a2ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:14:41.286889 1186705 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 13:14:41.286916 1186705 start.go:240] waiting for startup goroutines ...
	I0311 13:14:41.286937 1186705 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 13:14:41.291715 1186705 out.go:177] * Enabled addons: 
	I0311 13:14:41.287401 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:14:41.293591 1186705 addons.go:505] duration metric: took 6.661307ms for enable addons: enabled=[]
	I0311 13:14:41.293627 1186705 start.go:245] waiting for cluster config update ...
	I0311 13:14:41.293637 1186705 start.go:254] writing updated cluster config ...
	I0311 13:14:41.296365 1186705 out.go:177] 
	I0311 13:14:41.298522 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:14:41.298640 1186705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/config.json ...
	I0311 13:14:41.301236 1186705 out.go:177] * Starting "ha-992796-m02" control-plane node in "ha-992796" cluster
	I0311 13:14:41.302995 1186705 cache.go:121] Beginning downloading kic base image for docker with crio
	I0311 13:14:41.304897 1186705 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 13:14:41.306978 1186705 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 13:14:41.307011 1186705 cache.go:56] Caching tarball of preloaded images
	I0311 13:14:41.307075 1186705 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 13:14:41.307142 1186705 preload.go:173] Found /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0311 13:14:41.307157 1186705 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 13:14:41.307289 1186705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/config.json ...
	I0311 13:14:41.322066 1186705 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0311 13:14:41.322093 1186705 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0311 13:14:41.322112 1186705 cache.go:194] Successfully downloaded all kic artifacts
	I0311 13:14:41.322142 1186705 start.go:360] acquireMachinesLock for ha-992796-m02: {Name:mk0ebff418bfec5c87eb2c7607cbab748d71ecdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:14:41.322208 1186705 start.go:364] duration metric: took 43.806µs to acquireMachinesLock for "ha-992796-m02"
	I0311 13:14:41.322232 1186705 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:14:41.322243 1186705 fix.go:54] fixHost starting: m02
	I0311 13:14:41.322531 1186705 cli_runner.go:164] Run: docker container inspect ha-992796-m02 --format={{.State.Status}}
	I0311 13:14:41.337711 1186705 fix.go:112] recreateIfNeeded on ha-992796-m02: state=Stopped err=<nil>
	W0311 13:14:41.337742 1186705 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:14:41.341990 1186705 out.go:177] * Restarting existing docker container for "ha-992796-m02" ...
	I0311 13:14:41.344259 1186705 cli_runner.go:164] Run: docker start ha-992796-m02
	I0311 13:14:41.633649 1186705 cli_runner.go:164] Run: docker container inspect ha-992796-m02 --format={{.State.Status}}
	I0311 13:14:41.651002 1186705 kic.go:430] container "ha-992796-m02" state is running.
	I0311 13:14:41.651478 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796-m02
	I0311 13:14:41.676635 1186705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/config.json ...
	I0311 13:14:41.676874 1186705 machine.go:94] provisionDockerMachine start ...
	I0311 13:14:41.676930 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:41.696818 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:14:41.697060 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33997 <nil> <nil>}
	I0311 13:14:41.697069 1186705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 13:14:41.698094 1186705 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58372->127.0.0.1:33997: read: connection reset by peer
	I0311 13:14:44.881946 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-992796-m02
	
	I0311 13:14:44.881967 1186705 ubuntu.go:169] provisioning hostname "ha-992796-m02"
	I0311 13:14:44.882044 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:44.910381 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:14:44.910633 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33997 <nil> <nil>}
	I0311 13:14:44.910645 1186705 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-992796-m02 && echo "ha-992796-m02" | sudo tee /etc/hostname
	I0311 13:14:45.118723 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-992796-m02
	
	I0311 13:14:45.118821 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:45.159134 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:14:45.159409 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33997 <nil> <nil>}
	I0311 13:14:45.159434 1186705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-992796-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-992796-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-992796-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 13:14:45.362993 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 13:14:45.363030 1186705 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18350-1124504/.minikube CaCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18350-1124504/.minikube}
	I0311 13:14:45.363049 1186705 ubuntu.go:177] setting up certificates
	I0311 13:14:45.363059 1186705 provision.go:84] configureAuth start
	I0311 13:14:45.363133 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796-m02
	I0311 13:14:45.437380 1186705 provision.go:143] copyHostCerts
	I0311 13:14:45.437423 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem
	I0311 13:14:45.437465 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem, removing ...
	I0311 13:14:45.437476 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem
	I0311 13:14:45.437554 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem (1123 bytes)
	I0311 13:14:45.437647 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem
	I0311 13:14:45.437670 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem, removing ...
	I0311 13:14:45.437680 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem
	I0311 13:14:45.437708 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem (1675 bytes)
	I0311 13:14:45.437760 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem
	I0311 13:14:45.437781 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem, removing ...
	I0311 13:14:45.437790 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem
	I0311 13:14:45.437826 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem (1078 bytes)
	I0311 13:14:45.437877 1186705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem org=jenkins.ha-992796-m02 san=[127.0.0.1 192.168.49.3 ha-992796-m02 localhost minikube]
	I0311 13:14:45.789689 1186705 provision.go:177] copyRemoteCerts
	I0311 13:14:45.789766 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 13:14:45.789812 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:45.813631 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33997 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m02/id_rsa Username:docker}
	I0311 13:14:45.943026 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 13:14:45.943082 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 13:14:45.994158 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 13:14:45.994266 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 13:14:46.064449 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 13:14:46.064515 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0311 13:14:46.131208 1186705 provision.go:87] duration metric: took 768.13155ms to configureAuth
	I0311 13:14:46.131241 1186705 ubuntu.go:193] setting minikube options for container-runtime
	I0311 13:14:46.131492 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:14:46.131599 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:46.168326 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:14:46.168571 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33997 <nil> <nil>}
	I0311 13:14:46.168586 1186705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 13:14:46.634388 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 13:14:46.634414 1186705 machine.go:97] duration metric: took 4.95752943s to provisionDockerMachine
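
The tee above installs /etc/sysconfig/crio.minikube, an environment file the CRI-O unit reads so that the Service CIDR (10.96.0.0/12) is treated as an insecure registry range. A sketch of composing that provisioning command (crioEnvCommand is an illustrative name, not minikube's helper):

package provision

import "fmt"

// crioEnvCommand assembles the remote command seen above: install CRI-O's
// minikube environment file, then restart the daemon so it takes effect.
func crioEnvCommand(insecureCIDR string) string {
	env := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureCIDR)
	return `sudo mkdir -p /etc/sysconfig && printf %s "` + env + `" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
}
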
	I0311 13:14:46.634426 1186705 start.go:293] postStartSetup for "ha-992796-m02" (driver="docker")
	I0311 13:14:46.634458 1186705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 13:14:46.634560 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 13:14:46.634611 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:46.653811 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33997 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m02/id_rsa Username:docker}
	I0311 13:14:46.764577 1186705 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 13:14:46.768877 1186705 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0311 13:14:46.768922 1186705 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0311 13:14:46.768933 1186705 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0311 13:14:46.768943 1186705 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0311 13:14:46.768952 1186705 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-1124504/.minikube/addons for local assets ...
	I0311 13:14:46.769003 1186705 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-1124504/.minikube/files for local assets ...
	I0311 13:14:46.769082 1186705 filesync.go:149] local asset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> 11299062.pem in /etc/ssl/certs
	I0311 13:14:46.769093 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> /etc/ssl/certs/11299062.pem
	I0311 13:14:46.769189 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 13:14:46.787087 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem --> /etc/ssl/certs/11299062.pem (1708 bytes)
	I0311 13:14:46.883235 1186705 start.go:296] duration metric: took 248.781914ms for postStartSetup
	I0311 13:14:46.883331 1186705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:14:46.883396 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:46.907644 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33997 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m02/id_rsa Username:docker}
	I0311 13:14:47.027078 1186705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0311 13:14:47.043984 1186705 fix.go:56] duration metric: took 5.721734765s for fixHost
	I0311 13:14:47.044012 1186705 start.go:83] releasing machines lock for "ha-992796-m02", held for 5.721792208s
	I0311 13:14:47.044097 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796-m02
	I0311 13:14:47.069086 1186705 out.go:177] * Found network options:
	I0311 13:14:47.070954 1186705 out.go:177]   - NO_PROXY=192.168.49.2
	W0311 13:14:47.072952 1186705 proxy.go:119] fail to check proxy env: Error ip not in block
	W0311 13:14:47.073019 1186705 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 13:14:47.073135 1186705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 13:14:47.073213 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:47.073440 1186705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 13:14:47.073485 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m02
	I0311 13:14:47.103865 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33997 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m02/id_rsa Username:docker}
	I0311 13:14:47.122552 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33997 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m02/id_rsa Username:docker}
	I0311 13:14:47.474263 1186705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 13:14:47.521401 1186705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 13:14:47.566664 1186705 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0311 13:14:47.566753 1186705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 13:14:47.598864 1186705 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0311 13:14:47.598932 1186705 start.go:494] detecting cgroup driver to use...
	I0311 13:14:47.598978 1186705 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0311 13:14:47.599050 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 13:14:47.638866 1186705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:14:47.700656 1186705 docker.go:217] disabling cri-docker service (if available) ...
	I0311 13:14:47.700784 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 13:14:47.753736 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 13:14:47.793285 1186705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 13:14:48.174925 1186705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 13:14:48.422361 1186705 docker.go:233] disabling docker service ...
	I0311 13:14:48.422488 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 13:14:48.481898 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 13:14:48.525852 1186705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 13:14:48.833070 1186705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 13:14:49.179377 1186705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
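
Because CRI-O is the selected runtime, the sequence above stops, disables, and masks the competing cri-docker and docker units so socket activation cannot bring them back; masking points the unit file at /dev/null. A sketch of that stop → disable → mask pattern over exec (silenceUnit is hypothetical, and unlike this sketch the real flow tolerates stop failures and re-checks with is-active):

package runtimes

import (
	"fmt"
	"os/exec"
)

// silenceUnit stops a systemd unit, disables it, and masks it so socket
// activation cannot restart it behind the container runtime's back.
func silenceUnit(unit string) error {
	for _, args := range [][]string{
		{"stop", "-f", unit},
		{"disable", unit},
		{"mask", unit},
	} {
		cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}
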
	I0311 13:14:49.216508 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:14:49.290594 1186705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 13:14:49.290722 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:14:49.348185 1186705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 13:14:49.348315 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:14:49.398689 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:14:49.434286 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:14:49.476460 1186705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 13:14:49.519855 1186705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 13:14:49.562960 1186705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 13:14:49.595591 1186705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:14:49.913991 1186705 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 13:14:51.449828 1186705 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.535738589s)
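
The sed edits above retarget CRI-O's drop-in config before the restart: pin pause_image to registry.k8s.io/pause:3.9, force cgroup_manager = "cgroupfs" to match the driver detected on the host, and re-insert conmon_cgroup = "pod", which cgroupfs mode requires. A Go sketch of the same line-oriented rewrite (rewriteCrioConf is illustrative; minikube shells out to sed as logged):

package crioconf

import "regexp"

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conmonRe = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf mirrors the sed commands in the log: pin the pause image,
// drop any existing conmon_cgroup line, then set the cgroup manager and
// re-add conmon_cgroup = "pod" immediately after it.
func rewriteCrioConf(conf []byte) []byte {
	conf = pauseRe.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	conf = conmonRe.ReplaceAll(conf, nil)
	conf = cgroupRe.ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	return conf
}
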
	I0311 13:14:51.449860 1186705 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 13:14:51.449928 1186705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 13:14:51.454112 1186705 start.go:562] Will wait 60s for crictl version
	I0311 13:14:51.454178 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:14:51.465404 1186705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 13:14:51.543039 1186705 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0311 13:14:51.543131 1186705 ssh_runner.go:195] Run: crio --version
	I0311 13:14:51.621882 1186705 ssh_runner.go:195] Run: crio --version
	I0311 13:14:51.698477 1186705 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0311 13:14:51.700285 1186705 out.go:177]   - env NO_PROXY=192.168.49.2
	I0311 13:14:51.702772 1186705 cli_runner.go:164] Run: docker network inspect ha-992796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 13:14:51.736334 1186705 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0311 13:14:51.757851 1186705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:14:51.807683 1186705 mustload.go:65] Loading cluster: ha-992796
	I0311 13:14:51.807933 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:14:51.808200 1186705 cli_runner.go:164] Run: docker container inspect ha-992796 --format={{.State.Status}}
	I0311 13:14:51.844232 1186705 host.go:66] Checking if "ha-992796" exists ...
	I0311 13:14:51.844506 1186705 certs.go:68] Setting up /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796 for IP: 192.168.49.3
	I0311 13:14:51.844521 1186705 certs.go:194] generating shared ca certs ...
	I0311 13:14:51.844537 1186705 certs.go:226] acquiring lock for ca certs: {Name:mk30659f158a045ae3a6809b62fbd61891660c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:14:51.844652 1186705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key
	I0311 13:14:51.844706 1186705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key
	I0311 13:14:51.844720 1186705 certs.go:256] generating profile certs ...
	I0311 13:14:51.844803 1186705 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/client.key
	I0311 13:14:51.844867 1186705 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key.6131eb3b
	I0311 13:14:51.844912 1186705 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.key
	I0311 13:14:51.844924 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 13:14:51.844936 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 13:14:51.844957 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 13:14:51.844967 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 13:14:51.844982 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 13:14:51.844997 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 13:14:51.845008 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 13:14:51.845025 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 13:14:51.845071 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem (1338 bytes)
	W0311 13:14:51.845102 1186705 certs.go:480] ignoring /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906_empty.pem, impossibly tiny 0 bytes
	I0311 13:14:51.845114 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 13:14:51.845142 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem (1078 bytes)
	I0311 13:14:51.845167 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem (1123 bytes)
	I0311 13:14:51.845194 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem (1675 bytes)
	I0311 13:14:51.845238 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem (1708 bytes)
	I0311 13:14:51.845330 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> /usr/share/ca-certificates/11299062.pem
	I0311 13:14:51.845370 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:14:51.845381 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem -> /usr/share/ca-certificates/1129906.pem
	I0311 13:14:51.845458 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:14:51.873657 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33992 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796/id_rsa Username:docker}
	I0311 13:14:51.997690 1186705 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0311 13:14:52.008722 1186705 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0311 13:14:52.035370 1186705 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0311 13:14:52.039678 1186705 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0311 13:14:52.065115 1186705 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0311 13:14:52.076408 1186705 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0311 13:14:52.094723 1186705 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0311 13:14:52.106956 1186705 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0311 13:14:52.127267 1186705 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0311 13:14:52.139730 1186705 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0311 13:14:52.179243 1186705 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0311 13:14:52.192657 1186705 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0311 13:14:52.221546 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 13:14:52.246743 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 13:14:52.275747 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 13:14:52.303384 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 13:14:52.332926 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 13:14:52.359074 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 13:14:52.385242 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 13:14:52.417119 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 13:14:52.445234 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem --> /usr/share/ca-certificates/11299062.pem (1708 bytes)
	I0311 13:14:52.471457 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 13:14:52.498985 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem --> /usr/share/ca-certificates/1129906.pem (1338 bytes)
	I0311 13:14:52.535326 1186705 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0311 13:14:52.556275 1186705 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0311 13:14:52.577572 1186705 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0311 13:14:52.598696 1186705 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0311 13:14:52.619612 1186705 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0311 13:14:52.646245 1186705 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0311 13:14:52.691879 1186705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0311 13:14:52.720025 1186705 ssh_runner.go:195] Run: openssl version
	I0311 13:14:52.730168 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1129906.pem && ln -fs /usr/share/ca-certificates/1129906.pem /etc/ssl/certs/1129906.pem"
	I0311 13:14:52.746750 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1129906.pem
	I0311 13:14:52.754820 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 13:02 /usr/share/ca-certificates/1129906.pem
	I0311 13:14:52.754915 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1129906.pem
	I0311 13:14:52.764696 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1129906.pem /etc/ssl/certs/51391683.0"
	I0311 13:14:52.774548 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11299062.pem && ln -fs /usr/share/ca-certificates/11299062.pem /etc/ssl/certs/11299062.pem"
	I0311 13:14:52.784936 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11299062.pem
	I0311 13:14:52.788778 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 13:02 /usr/share/ca-certificates/11299062.pem
	I0311 13:14:52.788866 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11299062.pem
	I0311 13:14:52.796363 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11299062.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 13:14:52.807550 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 13:14:52.817735 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:14:52.822082 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:14:52.822171 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:14:52.829490 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 13:14:52.842100 1186705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 13:14:52.846129 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 13:14:52.853198 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 13:14:52.863236 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 13:14:52.870192 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 13:14:52.882991 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 13:14:52.893527 1186705 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 13:14:52.901267 1186705 kubeadm.go:928] updating node {m02 192.168.49.3 8443 v1.28.4 crio true true} ...
	I0311 13:14:52.901487 1186705 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-992796-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-992796 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 13:14:52.901556 1186705 kube-vip.go:101] generating kube-vip config ...
	I0311 13:14:52.901609 1186705 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
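
The manifest above is what kube-vip.go renders for each control-plane node: a static pod (dropped into /etc/kubernetes/manifests by the scp that follows) that needs NET_ADMIN/NET_RAW to answer ARP for the floating VIP 192.168.49.254 and elects a leader via the plndr-cp-lock Lease in kube-system. A sketch of rendering such a manifest from a template with only the per-cluster fields parameterized (the type and the abbreviated template are illustrative; the real output is the full manifest shown above):

package kubevip

import (
	"bytes"
	"text/template"
)

type vipConfig struct {
	Address   string // floating control-plane VIP, e.g. 192.168.49.254
	Interface string // NIC that answers ARP for the VIP, e.g. eth0
	Port      string // apiserver port behind the VIP
	Image     string // kube-vip image reference
}

var vipTmpl = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: [manager]
    env:
    - {name: vip_interface, value: {{.Interface}}}
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: {{.Address}}}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`))

// renderVIPManifest fills the static-pod template with one node's values.
func renderVIPManifest(c vipConfig) ([]byte, error) {
	var buf bytes.Buffer
	if err := vipTmpl.Execute(&buf, c); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
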
	I0311 13:14:52.901705 1186705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 13:14:52.911026 1186705 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 13:14:52.911138 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0311 13:14:52.919873 1186705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0311 13:14:52.939923 1186705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 13:14:52.963779 1186705 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 13:14:52.985192 1186705 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0311 13:14:52.989178 1186705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:14:53.002241 1186705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:14:53.186809 1186705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:14:53.205905 1186705 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 13:14:53.211792 1186705 out.go:177] * Verifying Kubernetes components...
	I0311 13:14:53.206340 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:14:53.213976 1186705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:14:53.360759 1186705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:14:53.377181 1186705 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:14:53.377731 1186705 kapi.go:59] client config for ha-992796: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/client.crt", KeyFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/client.key", CAFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16fd7a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0311 13:14:53.377814 1186705 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0311 13:14:53.378138 1186705 node_ready.go:35] waiting up to 6m0s for node "ha-992796-m02" to be "Ready" ...
	I0311 13:14:53.378286 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:14:53.378300 1186705 round_trippers.go:469] Request Headers:
	I0311 13:14:53.378311 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:14:53.378314 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:04.621017 1186705 round_trippers.go:574] Response Status: 500 Internal Server Error in 11242 milliseconds
	I0311 13:15:04.621234 1186705 node_ready.go:53] error getting node "ha-992796-m02": etcdserver: request timed out
	I0311 13:15:04.621288 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:04.621292 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:04.621300 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:04.621303 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.717994 1186705 round_trippers.go:574] Response Status: 500 Internal Server Error in 12096 milliseconds
	I0311 13:15:16.718613 1186705 node_ready.go:53] error getting node "ha-992796-m02": etcdserver: leader changed
	I0311 13:15:16.718726 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:16.718738 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.718747 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.718750 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.730296 1186705 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0311 13:15:16.732760 1186705 node_ready.go:49] node "ha-992796-m02" has status "Ready":"True"
	I0311 13:15:16.732788 1186705 node_ready.go:38] duration metric: took 23.354577629s for node "ha-992796-m02" to be "Ready" ...
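
node_ready.go gets /api/v1/nodes/ha-992796-m02 in a loop until the NodeReady condition reports True, treating the two etcd 500s above ("request timed out", "leader changed") as transient and retrying. A condensed sketch of that wait using client-go; the kubeconfig path is a placeholder and error handling is trimmed:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-992796-m02", metav1.GetOptions{})
		if err == nil { // transient etcd errors simply fall through to the next poll
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node")
}
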
	I0311 13:15:16.732799 1186705 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 13:15:16.732872 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0311 13:15:16.732884 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.732892 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.732904 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.760309 1186705 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0311 13:15:16.770976 1186705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.771095 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:15:16.771106 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.771116 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.771123 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.774195 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:16.774876 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:16.774896 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.774904 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.774908 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.777692 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:16.778277 1186705 pod_ready.go:92] pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:16.778301 1186705 pod_ready.go:81] duration metric: took 7.290213ms for pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.778313 1186705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mqfn8" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.778372 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mqfn8
	I0311 13:15:16.778382 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.778390 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.778395 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.781009 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:16.781906 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:16.781926 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.781936 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.781941 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.784464 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:16.785307 1186705 pod_ready.go:92] pod "coredns-5dd5756b68-mqfn8" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:16.785362 1186705 pod_ready.go:81] duration metric: took 7.041194ms for pod "coredns-5dd5756b68-mqfn8" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.785386 1186705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.785462 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-992796
	I0311 13:15:16.785472 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.785481 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.785485 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.788399 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:16.789203 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:16.789223 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.789232 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.789237 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.792001 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:16.792885 1186705 pod_ready.go:92] pod "etcd-ha-992796" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:16.792906 1186705 pod_ready.go:81] duration metric: took 7.512057ms for pod "etcd-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.792918 1186705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.792979 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-992796-m02
	I0311 13:15:16.792989 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.792998 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.793001 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.795776 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:16.796631 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:16.796648 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.796656 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.796660 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.799373 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:16.800161 1186705 pod_ready.go:92] pod "etcd-ha-992796-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:16.800182 1186705 pod_ready.go:81] duration metric: took 7.254242ms for pod "etcd-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.800193 1186705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-992796-m03" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:16.919520 1186705 request.go:629] Waited for 119.24088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-992796-m03
	I0311 13:15:16.919582 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-992796-m03
	I0311 13:15:16.919594 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:16.919603 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:16.919610 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:16.922632 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
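
The "Waited ... due to client-side throttling, not priority and fairness" lines that start here come from client-go's token-bucket rate limiter, not from the API server: with QPS and Burst both 0 in the rest.Config dumped earlier, the client falls back to its defaults of 5 requests per second with a burst of 10, which is why consecutive pod and node GETs queue for roughly 195ms each. An illustrative way to raise those limits on a config:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// QPS/Burst of 0 means "use client defaults" (5 qps, burst 10), which
	// produces the ~195ms throttling waits seen in the log. Raising them lets
	// the readiness checks issue their GETs back to back.
	cfg.QPS = 50
	cfg.Burst = 100
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = clientset
}
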
	I0311 13:15:17.119569 1186705 request.go:629] Waited for 196.32425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:17.119625 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:17.119631 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:17.119640 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:17.119684 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:17.123082 1186705 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0311 13:15:17.123394 1186705 pod_ready.go:97] node "ha-992796-m03" hosting pod "etcd-ha-992796-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:17.123435 1186705 pod_ready.go:81] duration metric: took 323.230077ms for pod "etcd-ha-992796-m03" in "kube-system" namespace to be "Ready" ...
	E0311 13:15:17.123449 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796-m03" hosting pod "etcd-ha-992796-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:17.123471 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:17.319816 1186705 request.go:629] Waited for 196.204688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796
	I0311 13:15:17.319927 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796
	I0311 13:15:17.319948 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:17.319990 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:17.320008 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:17.323223 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:17.519481 1186705 request.go:629] Waited for 195.345757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:17.519559 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:17.519571 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:17.519580 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:17.519585 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:17.522512 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:17.523085 1186705 pod_ready.go:92] pod "kube-apiserver-ha-992796" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:17.523109 1186705 pod_ready.go:81] duration metric: took 399.622118ms for pod "kube-apiserver-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:17.523121 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:17.719304 1186705 request.go:629] Waited for 196.121514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796-m02
	I0311 13:15:17.719411 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796-m02
	I0311 13:15:17.719447 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:17.719489 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:17.719510 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:17.729672 1186705 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0311 13:15:17.918926 1186705 request.go:629] Waited for 188.246948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:17.918986 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:17.919024 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:17.919034 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:17.919038 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:17.922395 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:17.923123 1186705 pod_ready.go:92] pod "kube-apiserver-ha-992796-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:17.923146 1186705 pod_ready.go:81] duration metric: took 400.017636ms for pod "kube-apiserver-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:17.923159 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-992796-m03" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:18.119084 1186705 request.go:629] Waited for 195.852329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796-m03
	I0311 13:15:18.119201 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796-m03
	I0311 13:15:18.119213 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:18.119225 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:18.119229 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:18.122369 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:18.319598 1186705 request.go:629] Waited for 196.317883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:18.319717 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:18.319739 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:18.319761 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:18.319781 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:18.323429 1186705 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0311 13:15:18.323730 1186705 pod_ready.go:97] node "ha-992796-m03" hosting pod "kube-apiserver-ha-992796-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:18.323779 1186705 pod_ready.go:81] duration metric: took 400.612173ms for pod "kube-apiserver-ha-992796-m03" in "kube-system" namespace to be "Ready" ...
	E0311 13:15:18.323805 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796-m03" hosting pod "kube-apiserver-ha-992796-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:18.323827 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:18.519533 1186705 request.go:629] Waited for 195.612506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796
	I0311 13:15:18.519740 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796
	I0311 13:15:18.519754 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:18.519763 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:18.519768 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:18.527174 1186705 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 13:15:18.719628 1186705 request.go:629] Waited for 191.129744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:18.719736 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:18.719758 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:18.719796 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:18.719816 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:18.723713 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:18.724706 1186705 pod_ready.go:92] pod "kube-controller-manager-ha-992796" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:18.724764 1186705 pod_ready.go:81] duration metric: took 400.903792ms for pod "kube-controller-manager-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:18.724790 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:18.919638 1186705 request.go:629] Waited for 194.750744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796-m02
	I0311 13:15:18.919756 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796-m02
	I0311 13:15:18.919789 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:18.919821 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:18.919839 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:18.923133 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:19.119497 1186705 request.go:629] Waited for 195.343402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:19.119573 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:19.119585 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:19.119595 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:19.119604 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:19.122564 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:19.123592 1186705 pod_ready.go:92] pod "kube-controller-manager-ha-992796-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:19.123650 1186705 pod_ready.go:81] duration metric: took 398.838894ms for pod "kube-controller-manager-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:19.123678 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-992796-m03" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:19.319530 1186705 request.go:629] Waited for 195.765027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796-m03
	I0311 13:15:19.319644 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796-m03
	I0311 13:15:19.319665 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:19.319745 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:19.319775 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:19.326972 1186705 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 13:15:19.519575 1186705 request.go:629] Waited for 191.31712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:19.519660 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:19.519672 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:19.519726 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:19.519736 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:19.522783 1186705 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0311 13:15:19.523126 1186705 pod_ready.go:97] node "ha-992796-m03" hosting pod "kube-controller-manager-ha-992796-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:19.523152 1186705 pod_ready.go:81] duration metric: took 399.451005ms for pod "kube-controller-manager-ha-992796-m03" in "kube-system" namespace to be "Ready" ...
	E0311 13:15:19.523180 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796-m03" hosting pod "kube-controller-manager-ha-992796-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:19.523203 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2p8p9" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:19.719375 1186705 request.go:629] Waited for 196.084987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2p8p9
	I0311 13:15:19.719470 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2p8p9
	I0311 13:15:19.719529 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:19.719548 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:19.719563 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:19.725791 1186705 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 13:15:19.918823 1186705 request.go:629] Waited for 192.206319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:19.918910 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:19.918922 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:19.918975 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:19.918982 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:19.922338 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:19.923348 1186705 pod_ready.go:92] pod "kube-proxy-2p8p9" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:19.923378 1186705 pod_ready.go:81] duration metric: took 400.158555ms for pod "kube-proxy-2p8p9" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:19.923390 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5rxbt" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:20.119137 1186705 request.go:629] Waited for 195.646656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5rxbt
	I0311 13:15:20.119250 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5rxbt
	I0311 13:15:20.119283 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:20.119294 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:20.119316 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:20.133375 1186705 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0311 13:15:20.319709 1186705 request.go:629] Waited for 185.31342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:15:20.319822 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:15:20.319838 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:20.319848 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:20.319864 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:20.322915 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:20.323537 1186705 pod_ready.go:92] pod "kube-proxy-5rxbt" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:20.323556 1186705 pod_ready.go:81] duration metric: took 400.137706ms for pod "kube-proxy-5rxbt" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:20.323569 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6dzwv" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:20.518919 1186705 request.go:629] Waited for 195.231988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dzwv
	I0311 13:15:20.519004 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dzwv
	I0311 13:15:20.519030 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:20.519056 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:20.519062 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:20.522192 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:20.719075 1186705 request.go:629] Waited for 196.246427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:20.719202 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:20.719214 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:20.719224 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:20.719233 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:20.723346 1186705 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 13:15:20.723949 1186705 pod_ready.go:92] pod "kube-proxy-6dzwv" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:20.723970 1186705 pod_ready.go:81] duration metric: took 400.393971ms for pod "kube-proxy-6dzwv" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:20.723983 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pb9kg" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:20.919796 1186705 request.go:629] Waited for 195.745098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pb9kg
	I0311 13:15:20.919857 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pb9kg
	I0311 13:15:20.919871 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:20.919881 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:20.919890 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:20.923130 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:21.118923 1186705 request.go:629] Waited for 195.141249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:21.118992 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:21.119003 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:21.119014 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:21.119037 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:21.122798 1186705 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0311 13:15:21.122981 1186705 pod_ready.go:97] node "ha-992796-m03" hosting pod "kube-proxy-pb9kg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:21.123005 1186705 pod_ready.go:81] duration metric: took 399.014249ms for pod "kube-proxy-pb9kg" in "kube-system" namespace to be "Ready" ...
	E0311 13:15:21.123015 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796-m03" hosting pod "kube-proxy-pb9kg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:21.123024 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:21.319455 1186705 request.go:629] Waited for 196.352369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796
	I0311 13:15:21.319546 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796
	I0311 13:15:21.319617 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:21.319627 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:21.319631 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:21.324078 1186705 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 13:15:21.519431 1186705 request.go:629] Waited for 194.333353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:21.519521 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:15:21.519535 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:21.519544 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:21.519548 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:21.522773 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:21.523551 1186705 pod_ready.go:92] pod "kube-scheduler-ha-992796" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:21.523574 1186705 pod_ready.go:81] duration metric: took 400.540486ms for pod "kube-scheduler-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:21.523587 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:21.718925 1186705 request.go:629] Waited for 195.263896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796-m02
	I0311 13:15:21.719007 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796-m02
	I0311 13:15:21.719024 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:21.719034 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:21.719039 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:21.722340 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:15:21.919636 1186705 request.go:629] Waited for 196.329206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:21.919701 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:15:21.919715 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:21.919724 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:21.919737 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:21.922686 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:21.923287 1186705 pod_ready.go:92] pod "kube-scheduler-ha-992796-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 13:15:21.923308 1186705 pod_ready.go:81] duration metric: took 399.714202ms for pod "kube-scheduler-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:21.923320 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-992796-m03" in "kube-system" namespace to be "Ready" ...
	I0311 13:15:22.119307 1186705 request.go:629] Waited for 195.923063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796-m03
	I0311 13:15:22.119405 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796-m03
	I0311 13:15:22.119417 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:22.119426 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:22.119430 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:22.122397 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:15:22.319182 1186705 request.go:629] Waited for 196.232314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:22.319245 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m03
	I0311 13:15:22.319254 1186705 round_trippers.go:469] Request Headers:
	I0311 13:15:22.319263 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:15:22.319270 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:15:22.322193 1186705 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0311 13:15:22.322628 1186705 pod_ready.go:97] node "ha-992796-m03" hosting pod "kube-scheduler-ha-992796-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:22.322655 1186705 pod_ready.go:81] duration metric: took 399.327004ms for pod "kube-scheduler-ha-992796-m03" in "kube-system" namespace to be "Ready" ...
	E0311 13:15:22.322666 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796-m03" hosting pod "kube-scheduler-ha-992796-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-992796-m03": nodes "ha-992796-m03" not found
	I0311 13:15:22.322676 1186705 pod_ready.go:38] duration metric: took 5.589865001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 13:15:22.322697 1186705 api_server.go:52] waiting for apiserver process to appear ...
	I0311 13:15:22.322759 1186705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:15:22.333817 1186705 api_server.go:72] duration metric: took 29.127862627s to wait for apiserver process to appear ...
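
Before probing the healthz endpoint, the wait first confirms an apiserver process exists at all: pgrep -xnf matches the pattern against the full command line (-f), requires an exact match (-x), and returns only the newest matching process (-n). An equivalent check from Go, for illustration only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same check as the log line above; pgrep exits non-zero when nothing matches.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no apiserver process found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
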
	I0311 13:15:22.333845 1186705 api_server.go:88] waiting for apiserver healthz status ...
	I0311 13:15:22.333866 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:22.343576 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:22.343605 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
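
api_server.go re-queries /healthz roughly every 500ms; the endpoint returns 500 while any registered check fails, and in every dump here only the start-service-ip-repair-controllers post-start hook is still down (the response body withholds the failure reason). A self-contained sketch of the same poll; InsecureSkipVerify stands in for minikube's real client certificates and is only acceptable against a local test cluster:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	fmt.Println("gave up waiting for healthz")
}
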
	I0311 13:15:22.833989 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:22.842357 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:22.842389 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:23.334726 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:23.343644 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:23.343672 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:23.834530 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:23.843513 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:23.843543 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:24.334063 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:24.342431 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:24.342459 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... repeated healthz output omitted: minikube re-polled https://192.168.49.2:8443/healthz roughly every 500ms from 13:15:24 through 13:15:32, and each probe logged the identical verbose check list twice (once at Info for the 500 response, once at Warning); in every response the only failing check was [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld, where "reason withheld" indicates the apiserver suppresses failure details from callers not authorized to see them ...]
	I0311 13:15:32.834926 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:32.843677 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:32.843714 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:33.334206 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:33.346014 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:33.346049 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:33.833947 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:33.843669 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:33.843703 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:34.334208 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:34.344943 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:34.344988 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:34.834460 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:34.843769 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:34.843800 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:35.334492 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:35.343202 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:35.343230 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:35.834786 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:35.843286 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:35.843316 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:36.334810 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:36.344627 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:36.344661 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:36.833998 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:36.842936 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:36.842964 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:37.334460 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:37.344587 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:37.344628 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:37.833990 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:37.850244 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:37.850299 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:38.334460 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:38.343973 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:38.343998 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:38.834016 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:38.847160 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:38.847194 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:39.334497 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:39.343928 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:39.343959 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:39.834562 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:39.843518 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:39.843551 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:40.333977 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:40.342727 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:40.342768 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:40.834328 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:40.843731 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:40.843765 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:41.334482 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:41.343609 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:41.343639 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:41.834236 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:41.842707 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:41.842736 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:42.334019 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:42.346123 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:42.346156 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:42.834265 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:42.844150 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:42.844204 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:43.334700 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:43.343691 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:43.343742 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:43.834130 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:43.855198 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:43.855230 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:44.334854 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:44.343469 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:44.343556 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:44.834060 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:44.843953 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:44.843980 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:45.334603 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:45.343637 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:45.343673 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:45.834926 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:45.843458 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:45.843497 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:46.333987 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:46.342446 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:46.342471 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:46.833998 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:46.842531 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:46.842565 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:47.334047 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:47.342495 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:47.342526 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 13:15:47.834042 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:47.842913 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 13:15:47.842990 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	[... ten more healthz polls, 13:15:48.334 through 13:15:52.843 (one every ~500ms), each returning the same 500 body as above; the only failing check throughout is [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld. The identical I/W log pairs are elided here ...]
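	The pattern above is a plain poll loop against the apiserver's verbose /healthz endpoint, retried until every check passes. A minimal Go sketch of that loop follows; the endpoint is taken from this log, TLS verification is skipped only because the sketch carries no CA bundle, and this is an illustration, not minikube's actual api_server.go source:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumptions: apiserver address from this log; InsecureSkipVerify because
	// this sketch has no CA bundle for the cluster's self-signed certificate.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // body is "ok" once every [+] check passes
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
```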
	I0311 13:15:53.334532 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 13:15:53.334628 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 13:15:53.377458 1186705 cri.go:89] found id: "9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0"
	I0311 13:15:53.377483 1186705 cri.go:89] found id: "b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174"
	I0311 13:15:53.377488 1186705 cri.go:89] found id: ""
	I0311 13:15:53.377496 1186705 logs.go:276] 2 containers: [9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0 b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174]
	I0311 13:15:53.377552 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:53.381162 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:53.384599 1186705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 13:15:53.384672 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 13:15:53.425746 1186705 cri.go:89] found id: "668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46"
	I0311 13:15:53.425770 1186705 cri.go:89] found id: "5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e"
	I0311 13:15:53.425775 1186705 cri.go:89] found id: ""
	I0311 13:15:53.425782 1186705 logs.go:276] 2 containers: [668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46 5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e]
	I0311 13:15:53.425848 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:53.429262 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:53.432565 1186705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 13:15:53.432672 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 13:15:53.471217 1186705 cri.go:89] found id: ""
	I0311 13:15:53.471240 1186705 logs.go:276] 0 containers: []
	W0311 13:15:53.471248 1186705 logs.go:278] No container was found matching "coredns"
	I0311 13:15:53.471254 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 13:15:53.471317 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 13:15:53.509942 1186705 cri.go:89] found id: "abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b"
	I0311 13:15:53.509967 1186705 cri.go:89] found id: "3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb"
	I0311 13:15:53.509973 1186705 cri.go:89] found id: ""
	I0311 13:15:53.509980 1186705 logs.go:276] 2 containers: [abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b 3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb]
	I0311 13:15:53.510035 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:53.513517 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:53.516811 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 13:15:53.516881 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 13:15:53.554270 1186705 cri.go:89] found id: ""
	I0311 13:15:53.554337 1186705 logs.go:276] 0 containers: []
	W0311 13:15:53.554359 1186705 logs.go:278] No container was found matching "kube-proxy"
	I0311 13:15:53.554372 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 13:15:53.554446 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 13:15:53.602190 1186705 cri.go:89] found id: "5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5"
	I0311 13:15:53.602211 1186705 cri.go:89] found id: "d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d"
	I0311 13:15:53.602217 1186705 cri.go:89] found id: ""
	I0311 13:15:53.602224 1186705 logs.go:276] 2 containers: [5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5 d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d]
	I0311 13:15:53.602304 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:53.606459 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:53.610519 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 13:15:53.610625 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 13:15:53.650126 1186705 cri.go:89] found id: ""
	I0311 13:15:53.650197 1186705 logs.go:276] 0 containers: []
	W0311 13:15:53.650214 1186705 logs.go:278] No container was found matching "kindnet"
	I0311 13:15:53.650225 1186705 logs.go:123] Gathering logs for kube-apiserver [9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0] ...
	I0311 13:15:53.650238 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0"
	I0311 13:15:53.702786 1186705 logs.go:123] Gathering logs for kube-scheduler [abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b] ...
	I0311 13:15:53.702819 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b"
	I0311 13:15:53.741514 1186705 logs.go:123] Gathering logs for kube-scheduler [3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb] ...
	I0311 13:15:53.741539 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb"
	I0311 13:15:53.778106 1186705 logs.go:123] Gathering logs for kube-controller-manager [5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5] ...
	I0311 13:15:53.778186 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5"
	I0311 13:15:53.829224 1186705 logs.go:123] Gathering logs for kube-controller-manager [d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d] ...
	I0311 13:15:53.829262 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d"
	I0311 13:15:53.866453 1186705 logs.go:123] Gathering logs for kubelet ...
	I0311 13:15:53.866482 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:15:53.954401 1186705 logs.go:123] Gathering logs for dmesg ...
	I0311 13:15:53.954437 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:15:53.980780 1186705 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:15:53.980811 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:15:54.332818 1186705 logs.go:123] Gathering logs for kube-apiserver [b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174] ...
	I0311 13:15:54.332854 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174"
	I0311 13:15:54.381891 1186705 logs.go:123] Gathering logs for etcd [668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46] ...
	I0311 13:15:54.381921 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46"
	I0311 13:15:54.455239 1186705 logs.go:123] Gathering logs for etcd [5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e] ...
	I0311 13:15:54.455277 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e"
	I0311 13:15:54.553472 1186705 logs.go:123] Gathering logs for CRI-O ...
	I0311 13:15:54.553508 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 13:15:54.627155 1186705 logs.go:123] Gathering logs for container status ...
	I0311 13:15:54.627190 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
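	The gathering pass above shells into the node, lists each control-plane container by name, and tails its logs. A rough Go equivalent of those crictl invocations is sketched below; it assumes crictl is on PATH and runnable via sudo, mirrors the component names from the log, and is not minikube's actual logs.go implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, name := range names {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Mirrors: sudo crictl logs --tail 400 <container-id>
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
		}
	}
}
```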
	I0311 13:15:57.202902 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:15:57.390188 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 13:15:57.390261 1186705 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 13:15:57.390302 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 13:15:57.390392 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 13:15:57.494030 1186705 cri.go:89] found id: "9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0"
	I0311 13:15:57.494102 1186705 cri.go:89] found id: "b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174"
	I0311 13:15:57.494121 1186705 cri.go:89] found id: ""
	I0311 13:15:57.494143 1186705 logs.go:276] 2 containers: [9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0 b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174]
	I0311 13:15:57.494225 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:57.498412 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:57.505488 1186705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 13:15:57.505612 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 13:15:57.573099 1186705 cri.go:89] found id: "668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46"
	I0311 13:15:57.573169 1186705 cri.go:89] found id: "5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e"
	I0311 13:15:57.573188 1186705 cri.go:89] found id: ""
	I0311 13:15:57.573209 1186705 logs.go:276] 2 containers: [668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46 5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e]
	I0311 13:15:57.573290 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:57.578485 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:57.582295 1186705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 13:15:57.582416 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 13:15:57.640693 1186705 cri.go:89] found id: ""
	I0311 13:15:57.640743 1186705 logs.go:276] 0 containers: []
	W0311 13:15:57.640768 1186705 logs.go:278] No container was found matching "coredns"
	I0311 13:15:57.640789 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 13:15:57.640893 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 13:15:57.711832 1186705 cri.go:89] found id: "abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b"
	I0311 13:15:57.711928 1186705 cri.go:89] found id: "3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb"
	I0311 13:15:57.711950 1186705 cri.go:89] found id: ""
	I0311 13:15:57.711970 1186705 logs.go:276] 2 containers: [abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b 3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb]
	I0311 13:15:57.712057 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:57.721861 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:57.729748 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 13:15:57.729898 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 13:15:57.823720 1186705 cri.go:89] found id: ""
	I0311 13:15:57.823793 1186705 logs.go:276] 0 containers: []
	W0311 13:15:57.823815 1186705 logs.go:278] No container was found matching "kube-proxy"
	I0311 13:15:57.823834 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 13:15:57.823925 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 13:15:57.908612 1186705 cri.go:89] found id: "5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5"
	I0311 13:15:57.908681 1186705 cri.go:89] found id: "d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d"
	I0311 13:15:57.908699 1186705 cri.go:89] found id: ""
	I0311 13:15:57.908720 1186705 logs.go:276] 2 containers: [5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5 d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d]
	I0311 13:15:57.908803 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:57.914567 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:15:57.918366 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 13:15:57.918478 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 13:15:57.995259 1186705 cri.go:89] found id: ""
	I0311 13:15:57.995323 1186705 logs.go:276] 0 containers: []
	W0311 13:15:57.995345 1186705 logs.go:278] No container was found matching "kindnet"
	I0311 13:15:57.995367 1186705 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:15:57.995407 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:15:58.351940 1186705 logs.go:123] Gathering logs for kube-apiserver [9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0] ...
	I0311 13:15:58.351977 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0"
	I0311 13:15:58.417712 1186705 logs.go:123] Gathering logs for kube-apiserver [b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174] ...
	I0311 13:15:58.417743 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174"
	I0311 13:15:58.469958 1186705 logs.go:123] Gathering logs for kube-scheduler [3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb] ...
	I0311 13:15:58.469996 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb"
	I0311 13:15:58.516327 1186705 logs.go:123] Gathering logs for kube-controller-manager [5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5] ...
	I0311 13:15:58.516357 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5"
	I0311 13:15:58.588921 1186705 logs.go:123] Gathering logs for kube-controller-manager [d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d] ...
	I0311 13:15:58.588954 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d"
	I0311 13:15:58.635925 1186705 logs.go:123] Gathering logs for CRI-O ...
	I0311 13:15:58.635954 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 13:15:58.714946 1186705 logs.go:123] Gathering logs for kubelet ...
	I0311 13:15:58.714981 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:15:58.812931 1186705 logs.go:123] Gathering logs for etcd [668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46] ...
	I0311 13:15:58.812966 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46"
	I0311 13:15:58.872898 1186705 logs.go:123] Gathering logs for etcd [5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e] ...
	I0311 13:15:58.872932 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e"
	I0311 13:15:58.934874 1186705 logs.go:123] Gathering logs for kube-scheduler [abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b] ...
	I0311 13:15:58.934906 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b"
	I0311 13:15:58.982257 1186705 logs.go:123] Gathering logs for container status ...
	I0311 13:15:58.982286 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:15:59.044144 1186705 logs.go:123] Gathering logs for dmesg ...
	I0311 13:15:59.044174 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:16:01.575976 1186705 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 13:16:01.586037 1186705 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0311 13:16:01.586112 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0311 13:16:01.586118 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:01.586127 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:01.586132 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:01.600517 1186705 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0311 13:16:01.600672 1186705 api_server.go:141] control plane version: v1.28.4
	I0311 13:16:01.600695 1186705 api_server.go:131] duration metric: took 39.266843058s to wait for apiserver health ...
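	The health wait has now succeeded (about 39s end to end) and the tool moves on to waiting for kube-system pods. That check amounts to listing pods in the kube-system namespace and retrying until some appear; roughly, as a client-go sketch with the same assumed kubeconfig path:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path from the log's kubectl invocations.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// "waiting for kube-system pods to appear" = retry until this is > 0
	fmt.Printf("%d kube-system pods\n", len(pods.Items))
}
```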
	I0311 13:16:01.600715 1186705 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 13:16:01.600742 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 13:16:01.600812 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 13:16:01.647011 1186705 cri.go:89] found id: "9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0"
	I0311 13:16:01.647035 1186705 cri.go:89] found id: "b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174"
	I0311 13:16:01.647040 1186705 cri.go:89] found id: ""
	I0311 13:16:01.647047 1186705 logs.go:276] 2 containers: [9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0 b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174]
	I0311 13:16:01.647103 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:01.651039 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:01.654537 1186705 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 13:16:01.654609 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 13:16:01.699806 1186705 cri.go:89] found id: "668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46"
	I0311 13:16:01.699827 1186705 cri.go:89] found id: "5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e"
	I0311 13:16:01.699832 1186705 cri.go:89] found id: ""
	I0311 13:16:01.699839 1186705 logs.go:276] 2 containers: [668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46 5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e]
	I0311 13:16:01.699905 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:01.704490 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:01.708249 1186705 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 13:16:01.708318 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 13:16:01.762923 1186705 cri.go:89] found id: ""
	I0311 13:16:01.762946 1186705 logs.go:276] 0 containers: []
	W0311 13:16:01.762955 1186705 logs.go:278] No container was found matching "coredns"
	I0311 13:16:01.762986 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 13:16:01.763065 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 13:16:01.801055 1186705 cri.go:89] found id: "abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b"
	I0311 13:16:01.801125 1186705 cri.go:89] found id: "3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb"
	I0311 13:16:01.801144 1186705 cri.go:89] found id: ""
	I0311 13:16:01.801166 1186705 logs.go:276] 2 containers: [abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b 3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb]
	I0311 13:16:01.801250 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:01.804999 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:01.808340 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 13:16:01.808421 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 13:16:01.847262 1186705 cri.go:89] found id: ""
	I0311 13:16:01.847285 1186705 logs.go:276] 0 containers: []
	W0311 13:16:01.847294 1186705 logs.go:278] No container was found matching "kube-proxy"
	I0311 13:16:01.847300 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 13:16:01.847357 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 13:16:01.884880 1186705 cri.go:89] found id: "5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5"
	I0311 13:16:01.884969 1186705 cri.go:89] found id: "d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d"
	I0311 13:16:01.885011 1186705 cri.go:89] found id: ""
	I0311 13:16:01.885047 1186705 logs.go:276] 2 containers: [5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5 d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d]
	I0311 13:16:01.885138 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:01.888868 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:01.892425 1186705 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 13:16:01.892513 1186705 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 13:16:01.935363 1186705 cri.go:89] found id: ""
	I0311 13:16:01.935387 1186705 logs.go:276] 0 containers: []
	W0311 13:16:01.935409 1186705 logs.go:278] No container was found matching "kindnet"
	I0311 13:16:01.935420 1186705 logs.go:123] Gathering logs for kubelet ...
	I0311 13:16:01.935432 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:16:02.028607 1186705 logs.go:123] Gathering logs for dmesg ...
	I0311 13:16:02.028644 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:16:02.050334 1186705 logs.go:123] Gathering logs for kube-apiserver [9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0] ...
	I0311 13:16:02.050368 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c725adb8756d09b9f65abf1086d1332b43e038cdd6fcb354dd4d2f7e43bbbc0"
	I0311 13:16:02.123207 1186705 logs.go:123] Gathering logs for etcd [5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e] ...
	I0311 13:16:02.123242 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5199959bd27d7c08d8c52ba5d4a143096131271f565e3bb8b5f965cc47c0d49e"
	I0311 13:16:02.195632 1186705 logs.go:123] Gathering logs for kube-scheduler [abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b] ...
	I0311 13:16:02.195671 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abbcb417c383c6a3186ff5e9340a3253c4fbc08fe7c2abbc53f8bd8d891d279b"
	I0311 13:16:02.247724 1186705 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:16:02.247752 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:16:02.489906 1186705 logs.go:123] Gathering logs for kube-apiserver [b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174] ...
	I0311 13:16:02.489950 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7782589e99f72ebba4d6f9a59a587dc8172acb9cdcfec29c1441eba17134174"
	I0311 13:16:02.530464 1186705 logs.go:123] Gathering logs for etcd [668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46] ...
	I0311 13:16:02.530492 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 668d1483b1904f49ec8d03a57a60a7d417df9d0dd0ff786a03f331824d63bc46"
	I0311 13:16:02.579831 1186705 logs.go:123] Gathering logs for kube-scheduler [3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb] ...
	I0311 13:16:02.579863 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b9561677296bffdeb8d086e72eb59d7747e0f37210acc3ede746931c3a0cfdb"
	I0311 13:16:02.615610 1186705 logs.go:123] Gathering logs for kube-controller-manager [5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5] ...
	I0311 13:16:02.615641 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c812b97058fef84bdde809aeb45c326c36e049b4889539d63e1f9feaba9c2f5"
	I0311 13:16:02.699163 1186705 logs.go:123] Gathering logs for kube-controller-manager [d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d] ...
	I0311 13:16:02.699195 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e4e2ef94890ae9a3184dd94a784a6d7194de85134f60dff0a7b5bdef62294d"
	I0311 13:16:02.751274 1186705 logs.go:123] Gathering logs for CRI-O ...
	I0311 13:16:02.751346 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 13:16:02.819115 1186705 logs.go:123] Gathering logs for container status ...
	I0311 13:16:02.819150 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
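
The container-status command above is a two-level fallback: the backticks substitute the resolved crictl path if one exists (or the bare word crictl, letting sudo's PATH have a try), and if that listing fails entirely the runner falls back to the docker CLI. The same one-liner, expanded for readability:

    # prefer crictl wherever it resolves; fall back to docker if the listing fails
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
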
	I0311 13:16:05.369364 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0311 13:16:05.369444 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:05.369462 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:05.369524 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:05.378801 1186705 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0311 13:16:05.388825 1186705 system_pods.go:59] 26 kube-system pods found
	I0311 13:16:05.388869 1186705 system_pods.go:61] "coredns-5dd5756b68-2qpt7" [c4ee000a-aee0-406e-92c2-8607a43086f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 13:16:05.388882 1186705 system_pods.go:61] "coredns-5dd5756b68-mqfn8" [e180beed-aab3-40be-9ab0-3306eeccaa63] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 13:16:05.388890 1186705 system_pods.go:61] "etcd-ha-992796" [d7121ae1-f2d2-490c-955e-89f821c5b548] Running
	I0311 13:16:05.388894 1186705 system_pods.go:61] "etcd-ha-992796-m02" [c7616e2f-2a6d-49cb-be1f-f58cb04c9008] Running
	I0311 13:16:05.388899 1186705 system_pods.go:61] "etcd-ha-992796-m03" [72654c69-4b8d-4839-82a3-38619897e331] Running
	I0311 13:16:05.388903 1186705 system_pods.go:61] "kindnet-64dzs" [9d945976-e318-452f-8912-b638c6486f36] Running
	I0311 13:16:05.388910 1186705 system_pods.go:61] "kindnet-pgt46" [21fab67c-2bb7-4df0-a739-07d78c80ce07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0311 13:16:05.388914 1186705 system_pods.go:61] "kindnet-rxqfx" [172d1000-c9fd-44b3-a413-88e73e61df87] Running
	I0311 13:16:05.388918 1186705 system_pods.go:61] "kindnet-vbhdw" [3e7ae3ff-255c-432f-bf63-9d770a6c90c0] Running
	I0311 13:16:05.388924 1186705 system_pods.go:61] "kube-apiserver-ha-992796" [fa82046f-441f-408f-823d-d6649eb5d0de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 13:16:05.388932 1186705 system_pods.go:61] "kube-apiserver-ha-992796-m02" [dd084d19-d068-43ee-9009-edf98aa8b6a6] Running
	I0311 13:16:05.388937 1186705 system_pods.go:61] "kube-apiserver-ha-992796-m03" [cee2f30d-1b3a-4f96-973f-412159da3432] Running
	I0311 13:16:05.388948 1186705 system_pods.go:61] "kube-controller-manager-ha-992796" [2a5f7fab-d969-4d67-9eb8-f5649711570d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 13:16:05.388957 1186705 system_pods.go:61] "kube-controller-manager-ha-992796-m02" [9623e5f3-1f55-49bc-88e6-bc3cc9709336] Running
	I0311 13:16:05.388962 1186705 system_pods.go:61] "kube-controller-manager-ha-992796-m03" [c5eec009-560f-4e9a-9f5e-de0245076995] Running
	I0311 13:16:05.388966 1186705 system_pods.go:61] "kube-proxy-2p8p9" [ab1c9a1b-c18f-4099-99df-3c04c1063a50] Running
	I0311 13:16:05.388971 1186705 system_pods.go:61] "kube-proxy-5rxbt" [3e659843-2526-467e-8df8-51917c76443c] Running
	I0311 13:16:05.388979 1186705 system_pods.go:61] "kube-proxy-6dzwv" [3521a63a-6ecf-4851-bcfd-61c73c3ae13b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 13:16:05.388988 1186705 system_pods.go:61] "kube-proxy-pb9kg" [6d8e7e26-ed2b-4c7d-b9d0-16d23388e008] Running
	I0311 13:16:05.388992 1186705 system_pods.go:61] "kube-scheduler-ha-992796" [8eddaf44-2cfa-49a8-8b0d-d0c855ef6584] Running
	I0311 13:16:05.388995 1186705 system_pods.go:61] "kube-scheduler-ha-992796-m02" [d6745dfb-c459-41b3-9c2d-b463ab8834e8] Running
	I0311 13:16:05.388999 1186705 system_pods.go:61] "kube-scheduler-ha-992796-m03" [9d327729-f5dd-460a-9e9e-030257cd2dd1] Running
	I0311 13:16:05.389002 1186705 system_pods.go:61] "kube-vip-ha-992796" [0f74dac9-64fa-49d8-9cb1-309a606b4f7e] Running
	I0311 13:16:05.389011 1186705 system_pods.go:61] "kube-vip-ha-992796-m02" [f09aeb0a-be52-4296-9356-5cfe24a53ca8] Running
	I0311 13:16:05.389015 1186705 system_pods.go:61] "kube-vip-ha-992796-m03" [cee2840d-32ff-4fc5-a7f2-ca396a3a8403] Running
	I0311 13:16:05.389019 1186705 system_pods.go:61] "storage-provisioner" [21a57c69-04e0-4b9a-ae5b-a275894688b0] Running
	I0311 13:16:05.389032 1186705 system_pods.go:74] duration metric: took 3.78830536s to wait for pod list to return data ...
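
That sweep is a single GET of /api/v1/namespaces/kube-system/pods, with Running-but-not-Ready pods flagged through their ContainersReady condition. Roughly the same view from the CLI, assuming the profile's kubeconfig context is named ha-992796:

    # list kube-system pods with their phase, readiness, and node placement
    kubectl --context ha-992796 get pods -n kube-system -o wide
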
	I0311 13:16:05.389040 1186705 default_sa.go:34] waiting for default service account to be created ...
	I0311 13:16:05.389132 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0311 13:16:05.389143 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:05.389151 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:05.389156 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:05.392151 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:05.392419 1186705 default_sa.go:45] found service account: "default"
	I0311 13:16:05.392439 1186705 default_sa.go:55] duration metric: took 3.385092ms for default service account to be created ...
	I0311 13:16:05.392450 1186705 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 13:16:05.392512 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0311 13:16:05.392522 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:05.392531 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:05.392534 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:05.399482 1186705 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 13:16:05.412982 1186705 system_pods.go:86] 26 kube-system pods found
	I0311 13:16:05.413066 1186705 system_pods.go:89] "coredns-5dd5756b68-2qpt7" [c4ee000a-aee0-406e-92c2-8607a43086f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 13:16:05.413085 1186705 system_pods.go:89] "coredns-5dd5756b68-mqfn8" [e180beed-aab3-40be-9ab0-3306eeccaa63] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 13:16:05.413094 1186705 system_pods.go:89] "etcd-ha-992796" [d7121ae1-f2d2-490c-955e-89f821c5b548] Running
	I0311 13:16:05.413100 1186705 system_pods.go:89] "etcd-ha-992796-m02" [c7616e2f-2a6d-49cb-be1f-f58cb04c9008] Running
	I0311 13:16:05.413105 1186705 system_pods.go:89] "etcd-ha-992796-m03" [72654c69-4b8d-4839-82a3-38619897e331] Running
	I0311 13:16:05.413109 1186705 system_pods.go:89] "kindnet-64dzs" [9d945976-e318-452f-8912-b638c6486f36] Running
	I0311 13:16:05.413124 1186705 system_pods.go:89] "kindnet-pgt46" [21fab67c-2bb7-4df0-a739-07d78c80ce07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0311 13:16:05.413130 1186705 system_pods.go:89] "kindnet-rxqfx" [172d1000-c9fd-44b3-a413-88e73e61df87] Running
	I0311 13:16:05.413136 1186705 system_pods.go:89] "kindnet-vbhdw" [3e7ae3ff-255c-432f-bf63-9d770a6c90c0] Running
	I0311 13:16:05.413143 1186705 system_pods.go:89] "kube-apiserver-ha-992796" [fa82046f-441f-408f-823d-d6649eb5d0de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 13:16:05.413151 1186705 system_pods.go:89] "kube-apiserver-ha-992796-m02" [dd084d19-d068-43ee-9009-edf98aa8b6a6] Running
	I0311 13:16:05.413159 1186705 system_pods.go:89] "kube-apiserver-ha-992796-m03" [cee2f30d-1b3a-4f96-973f-412159da3432] Running
	I0311 13:16:05.413167 1186705 system_pods.go:89] "kube-controller-manager-ha-992796" [2a5f7fab-d969-4d67-9eb8-f5649711570d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 13:16:05.413176 1186705 system_pods.go:89] "kube-controller-manager-ha-992796-m02" [9623e5f3-1f55-49bc-88e6-bc3cc9709336] Running
	I0311 13:16:05.413182 1186705 system_pods.go:89] "kube-controller-manager-ha-992796-m03" [c5eec009-560f-4e9a-9f5e-de0245076995] Running
	I0311 13:16:05.413186 1186705 system_pods.go:89] "kube-proxy-2p8p9" [ab1c9a1b-c18f-4099-99df-3c04c1063a50] Running
	I0311 13:16:05.413192 1186705 system_pods.go:89] "kube-proxy-5rxbt" [3e659843-2526-467e-8df8-51917c76443c] Running
	I0311 13:16:05.413198 1186705 system_pods.go:89] "kube-proxy-6dzwv" [3521a63a-6ecf-4851-bcfd-61c73c3ae13b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 13:16:05.413202 1186705 system_pods.go:89] "kube-proxy-pb9kg" [6d8e7e26-ed2b-4c7d-b9d0-16d23388e008] Running
	I0311 13:16:05.413210 1186705 system_pods.go:89] "kube-scheduler-ha-992796" [8eddaf44-2cfa-49a8-8b0d-d0c855ef6584] Running
	I0311 13:16:05.413214 1186705 system_pods.go:89] "kube-scheduler-ha-992796-m02" [d6745dfb-c459-41b3-9c2d-b463ab8834e8] Running
	I0311 13:16:05.413219 1186705 system_pods.go:89] "kube-scheduler-ha-992796-m03" [9d327729-f5dd-460a-9e9e-030257cd2dd1] Running
	I0311 13:16:05.413223 1186705 system_pods.go:89] "kube-vip-ha-992796" [0f74dac9-64fa-49d8-9cb1-309a606b4f7e] Running
	I0311 13:16:05.413232 1186705 system_pods.go:89] "kube-vip-ha-992796-m02" [f09aeb0a-be52-4296-9356-5cfe24a53ca8] Running
	I0311 13:16:05.413236 1186705 system_pods.go:89] "kube-vip-ha-992796-m03" [cee2840d-32ff-4fc5-a7f2-ca396a3a8403] Running
	I0311 13:16:05.413241 1186705 system_pods.go:89] "storage-provisioner" [21a57c69-04e0-4b9a-ae5b-a275894688b0] Running
	I0311 13:16:05.413249 1186705 system_pods.go:126] duration metric: took 20.788543ms to wait for k8s-apps to be running ...
	I0311 13:16:05.413261 1186705 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 13:16:05.413320 1186705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:16:05.429126 1186705 system_svc.go:56] duration metric: took 15.843934ms WaitForService to wait for kubelet
	I0311 13:16:05.429155 1186705 kubeadm.go:576] duration metric: took 1m12.223205749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
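
A note on the is-active invocation above: systemctl is-active accepts a list of unit names and exits 0 when at least one of them is active, so the stray literal "service" argument does not break the check as long as kubelet itself is up. Verifiable by hand:

    sudo systemctl is-active --quiet service kubelet; echo $?   # 0 while kubelet is active
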
	I0311 13:16:05.429178 1186705 node_conditions.go:102] verifying NodePressure condition ...
	I0311 13:16:05.429258 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0311 13:16:05.429276 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:05.429286 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:05.429291 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:05.433061 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:05.435193 1186705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 13:16:05.435228 1186705 node_conditions.go:123] node cpu capacity is 2
	I0311 13:16:05.435247 1186705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 13:16:05.435255 1186705 node_conditions.go:123] node cpu capacity is 2
	I0311 13:16:05.435260 1186705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 13:16:05.435265 1186705 node_conditions.go:123] node cpu capacity is 2
	I0311 13:16:05.435269 1186705 node_conditions.go:105] duration metric: took 6.086544ms to run NodePressure ...
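
The capacity pairs above come from the node objects returned by GET /api/v1/nodes. The same figures can be pulled from the CLI; a sketch, again assuming the ha-992796 context:

    # dump each node's capacity block (cpu, ephemeral-storage, ...)
    kubectl --context ha-992796 describe nodes | grep -A 2 'Capacity:'
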
	I0311 13:16:05.435284 1186705 start.go:240] waiting for startup goroutines ...
	I0311 13:16:05.435312 1186705 start.go:254] writing updated cluster config ...
	I0311 13:16:05.437904 1186705 out.go:177] 
	I0311 13:16:05.440213 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:16:05.440347 1186705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/config.json ...
	I0311 13:16:05.442602 1186705 out.go:177] * Starting "ha-992796-m04" worker node in "ha-992796" cluster
	I0311 13:16:05.446820 1186705 cache.go:121] Beginning downloading kic base image for docker with crio
	I0311 13:16:05.450299 1186705 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 13:16:05.453934 1186705 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 13:16:05.453975 1186705 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 13:16:05.453983 1186705 cache.go:56] Caching tarball of preloaded images
	I0311 13:16:05.454187 1186705 preload.go:173] Found /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0311 13:16:05.454197 1186705 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 13:16:05.454370 1186705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/config.json ...
	I0311 13:16:05.471429 1186705 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0311 13:16:05.471454 1186705 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0311 13:16:05.471477 1186705 cache.go:194] Successfully downloaded all kic artifacts
	I0311 13:16:05.471506 1186705 start.go:360] acquireMachinesLock for ha-992796-m04: {Name:mk8ea2cc74b0338a75a04a9eb836999b37f83554 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:16:05.471568 1186705 start.go:364] duration metric: took 45.611µs to acquireMachinesLock for "ha-992796-m04"
	I0311 13:16:05.471589 1186705 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:16:05.471594 1186705 fix.go:54] fixHost starting: m04
	I0311 13:16:05.471890 1186705 cli_runner.go:164] Run: docker container inspect ha-992796-m04 --format={{.State.Status}}
	I0311 13:16:05.487460 1186705 fix.go:112] recreateIfNeeded on ha-992796-m04: state=Stopped err=<nil>
	W0311 13:16:05.487485 1186705 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:16:05.490091 1186705 out.go:177] * Restarting existing docker container for "ha-992796-m04" ...
	I0311 13:16:05.492875 1186705 cli_runner.go:164] Run: docker start ha-992796-m04
	I0311 13:16:05.856467 1186705 cli_runner.go:164] Run: docker container inspect ha-992796-m04 --format={{.State.Status}}
	I0311 13:16:05.884340 1186705 kic.go:430] container "ha-992796-m04" state is running.
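
After docker start, the runner re-inspects the container until Docker reports it running. A minimal equivalent loop:

    # wait for the restarted worker container to reach the running state
    until [ "$(docker container inspect -f '{{.State.Status}}' ha-992796-m04)" = "running" ]; do
      sleep 0.5
    done
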
	I0311 13:16:05.884955 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796-m04
	I0311 13:16:05.912652 1186705 profile.go:142] Saving config to /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/config.json ...
	I0311 13:16:05.912906 1186705 machine.go:94] provisionDockerMachine start ...
	I0311 13:16:05.912979 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:05.948781 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:16:05.949028 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34002 <nil> <nil>}
	I0311 13:16:05.949052 1186705 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 13:16:05.949735 1186705 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0311 13:16:09.081493 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-992796-m04
	
	I0311 13:16:09.081568 1186705 ubuntu.go:169] provisioning hostname "ha-992796-m04"
	I0311 13:16:09.081697 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:09.105104 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:16:09.105435 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34002 <nil> <nil>}
	I0311 13:16:09.105451 1186705 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-992796-m04 && echo "ha-992796-m04" | sudo tee /etc/hostname
	I0311 13:16:09.258013 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-992796-m04
	
	I0311 13:16:09.258154 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:09.279418 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:16:09.279655 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34002 <nil> <nil>}
	I0311 13:16:09.279671 1186705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-992796-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-992796-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-992796-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 13:16:09.413691 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 13:16:09.413722 1186705 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18350-1124504/.minikube CaCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18350-1124504/.minikube}
	I0311 13:16:09.413739 1186705 ubuntu.go:177] setting up certificates
	I0311 13:16:09.413749 1186705 provision.go:84] configureAuth start
	I0311 13:16:09.413810 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796-m04
	I0311 13:16:09.433258 1186705 provision.go:143] copyHostCerts
	I0311 13:16:09.433303 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem
	I0311 13:16:09.433335 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem, removing ...
	I0311 13:16:09.433497 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem
	I0311 13:16:09.433577 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.pem (1078 bytes)
	I0311 13:16:09.433707 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem
	I0311 13:16:09.433736 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem, removing ...
	I0311 13:16:09.433745 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem
	I0311 13:16:09.433780 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/cert.pem (1123 bytes)
	I0311 13:16:09.433833 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem
	I0311 13:16:09.433854 1186705 exec_runner.go:144] found /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem, removing ...
	I0311 13:16:09.433861 1186705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem
	I0311 13:16:09.433893 1186705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18350-1124504/.minikube/key.pem (1675 bytes)
	I0311 13:16:09.433942 1186705 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem org=jenkins.ha-992796-m04 san=[127.0.0.1 192.168.49.5 ha-992796-m04 localhost minikube]
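
minikube generates that server certificate in-process with Go's crypto libraries, signing it with the profile CA and stamping the SAN list shown in the log line. An openssl sketch of the equivalent step, where the CSR step and 365-day validity are illustrative assumptions and the org and SANs are taken from the line above:

    openssl req -new -key server-key.pem -out server.csr -subj "/O=jenkins.ha-992796-m04"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.5,DNS:ha-992796-m04,DNS:localhost,DNS:minikube') \
      -out server.pem
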
	I0311 13:16:09.799399 1186705 provision.go:177] copyRemoteCerts
	I0311 13:16:09.799466 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 13:16:09.799520 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:09.818622 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34002 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m04/id_rsa Username:docker}
	I0311 13:16:09.915379 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 13:16:09.915445 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0311 13:16:09.955158 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 13:16:09.955226 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 13:16:09.991597 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 13:16:09.991663 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 13:16:10.072151 1186705 provision.go:87] duration metric: took 658.385651ms to configureAuth
	I0311 13:16:10.072179 1186705 ubuntu.go:193] setting minikube options for container-runtime
	I0311 13:16:10.072448 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:16:10.072562 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:10.097014 1186705 main.go:141] libmachine: Using SSH client type: native
	I0311 13:16:10.097281 1186705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34002 <nil> <nil>}
	I0311 13:16:10.097310 1186705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 13:16:10.413199 1186705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 13:16:10.413273 1186705 machine.go:97] duration metric: took 4.500339265s to provisionDockerMachine
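
The %!s(MISSING) in the 13:16:10.097 command is not what ran on the node: it is Go's fmt marker for a format verb with no matching argument, produced when the already-expanded command string was passed back through a printf-style logger. Judging by the SSH output echoed above, the command that actually executed was, roughly:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
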
	I0311 13:16:10.413303 1186705 start.go:293] postStartSetup for "ha-992796-m04" (driver="docker")
	I0311 13:16:10.413436 1186705 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 13:16:10.413529 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 13:16:10.413603 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:10.432501 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34002 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m04/id_rsa Username:docker}
	I0311 13:16:10.531218 1186705 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 13:16:10.534883 1186705 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0311 13:16:10.534924 1186705 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0311 13:16:10.534936 1186705 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0311 13:16:10.534943 1186705 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0311 13:16:10.534954 1186705 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-1124504/.minikube/addons for local assets ...
	I0311 13:16:10.535021 1186705 filesync.go:126] Scanning /home/jenkins/minikube-integration/18350-1124504/.minikube/files for local assets ...
	I0311 13:16:10.535110 1186705 filesync.go:149] local asset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> 11299062.pem in /etc/ssl/certs
	I0311 13:16:10.535120 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> /etc/ssl/certs/11299062.pem
	I0311 13:16:10.535222 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 13:16:10.543796 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem --> /etc/ssl/certs/11299062.pem (1708 bytes)
	I0311 13:16:10.571867 1186705 start.go:296] duration metric: took 158.534609ms for postStartSetup
	I0311 13:16:10.572008 1186705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:16:10.572083 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:10.588448 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34002 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m04/id_rsa Username:docker}
	I0311 13:16:10.682500 1186705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0311 13:16:10.687034 1186705 fix.go:56] duration metric: took 5.215432911s for fixHost
	I0311 13:16:10.687061 1186705 start.go:83] releasing machines lock for "ha-992796-m04", held for 5.215484011s
	I0311 13:16:10.687131 1186705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796-m04
	I0311 13:16:10.707918 1186705 out.go:177] * Found network options:
	I0311 13:16:10.709756 1186705 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0311 13:16:10.712028 1186705 proxy.go:119] fail to check proxy env: Error ip not in block
	W0311 13:16:10.712061 1186705 proxy.go:119] fail to check proxy env: Error ip not in block
	W0311 13:16:10.712088 1186705 proxy.go:119] fail to check proxy env: Error ip not in block
	W0311 13:16:10.712101 1186705 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 13:16:10.712176 1186705 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 13:16:10.712221 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:10.712517 1186705 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 13:16:10.712574 1186705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:16:10.735461 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34002 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m04/id_rsa Username:docker}
	I0311 13:16:10.746997 1186705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34002 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m04/id_rsa Username:docker}
	I0311 13:16:11.028138 1186705 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 13:16:11.032444 1186705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 13:16:11.043902 1186705 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0311 13:16:11.044038 1186705 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 13:16:11.054176 1186705 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
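
Same logger artifact at 13:16:11.044: %!p(MISSING) stands in for find's -printf format (the per-file "%p, " listing). The disabling strategy itself is just a rename: any bridge or podman CNI config gains a .mk_disabled suffix so the runtime stops loading it. With quoting restored, a sketch:

    # move competing CNI configs out of the way so only the desired CNI stays active
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
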
	I0311 13:16:11.054202 1186705 start.go:494] detecting cgroup driver to use...
	I0311 13:16:11.054236 1186705 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0311 13:16:11.054304 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 13:16:11.073240 1186705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:16:11.085509 1186705 docker.go:217] disabling cri-docker service (if available) ...
	I0311 13:16:11.085652 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 13:16:11.100903 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 13:16:11.115212 1186705 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 13:16:11.220954 1186705 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 13:16:11.314613 1186705 docker.go:233] disabling docker service ...
	I0311 13:16:11.314698 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 13:16:11.328286 1186705 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 13:16:11.340111 1186705 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 13:16:11.445009 1186705 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 13:16:11.567489 1186705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 13:16:11.580878 1186705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:16:11.598636 1186705 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 13:16:11.598702 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:16:11.608814 1186705 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 13:16:11.608923 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:16:11.619655 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:16:11.630570 1186705 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 13:16:11.641549 1186705 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 13:16:11.651300 1186705 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 13:16:11.661649 1186705 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 13:16:11.672769 1186705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:16:11.772319 1186705 ssh_runner.go:195] Run: sudo systemctl restart crio
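
Net effect of the sed edits just above on /etc/crio/crio.conf.d/02-crio.conf, sketched as comments (key names taken from the sed expressions in the log):

    # pause_image = "registry.k8s.io/pause:3.9"   <- pinned pause image
    # cgroup_manager = "cgroupfs"                 <- match the cgroup driver detected on the host
    # conmon_cgroup = "pod"                       <- dropped, then re-inserted after cgroup_manager
    # plus: /etc/cni/net.mk removed, bridge-nf-call-iptables probed, ip_forward set to 1
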
	I0311 13:16:11.922440 1186705 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 13:16:11.922571 1186705 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 13:16:11.927773 1186705 start.go:562] Will wait 60s for crictl version
	I0311 13:16:11.927884 1186705 ssh_runner.go:195] Run: which crictl
	I0311 13:16:11.932880 1186705 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 13:16:11.981062 1186705 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0311 13:16:11.981212 1186705 ssh_runner.go:195] Run: crio --version
	I0311 13:16:12.037163 1186705 ssh_runner.go:195] Run: crio --version
	I0311 13:16:12.102885 1186705 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0311 13:16:12.104837 1186705 out.go:177]   - env NO_PROXY=192.168.49.2
	I0311 13:16:12.106995 1186705 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0311 13:16:12.109429 1186705 cli_runner.go:164] Run: docker network inspect ha-992796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 13:16:12.127223 1186705 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0311 13:16:12.133116 1186705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:16:12.148494 1186705 mustload.go:65] Loading cluster: ha-992796
	I0311 13:16:12.148740 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:16:12.148996 1186705 cli_runner.go:164] Run: docker container inspect ha-992796 --format={{.State.Status}}
	I0311 13:16:12.168080 1186705 host.go:66] Checking if "ha-992796" exists ...
	I0311 13:16:12.168416 1186705 certs.go:68] Setting up /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796 for IP: 192.168.49.5
	I0311 13:16:12.168437 1186705 certs.go:194] generating shared ca certs ...
	I0311 13:16:12.168456 1186705 certs.go:226] acquiring lock for ca certs: {Name:mk30659f158a045ae3a6809b62fbd61891660c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:16:12.168589 1186705 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key
	I0311 13:16:12.168649 1186705 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key
	I0311 13:16:12.168664 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 13:16:12.168677 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 13:16:12.168688 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 13:16:12.168703 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 13:16:12.168767 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem (1338 bytes)
	W0311 13:16:12.168826 1186705 certs.go:480] ignoring /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906_empty.pem, impossibly tiny 0 bytes
	I0311 13:16:12.168836 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 13:16:12.168875 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/ca.pem (1078 bytes)
	I0311 13:16:12.168912 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/cert.pem (1123 bytes)
	I0311 13:16:12.168950 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/key.pem (1675 bytes)
	I0311 13:16:12.168998 1186705 certs.go:484] found cert: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem (1708 bytes)
	I0311 13:16:12.169043 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem -> /usr/share/ca-certificates/1129906.pem
	I0311 13:16:12.169063 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem -> /usr/share/ca-certificates/11299062.pem
	I0311 13:16:12.169079 1186705 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:16:12.169103 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 13:16:12.199757 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 13:16:12.229022 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 13:16:12.255963 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 13:16:12.282622 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/certs/1129906.pem --> /usr/share/ca-certificates/1129906.pem (1338 bytes)
	I0311 13:16:12.313400 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/ssl/certs/11299062.pem --> /usr/share/ca-certificates/11299062.pem (1708 bytes)
	I0311 13:16:12.339126 1186705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 13:16:12.364784 1186705 ssh_runner.go:195] Run: openssl version
	I0311 13:16:12.370458 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1129906.pem && ln -fs /usr/share/ca-certificates/1129906.pem /etc/ssl/certs/1129906.pem"
	I0311 13:16:12.380283 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1129906.pem
	I0311 13:16:12.384110 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 13:02 /usr/share/ca-certificates/1129906.pem
	I0311 13:16:12.384212 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1129906.pem
	I0311 13:16:12.391326 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1129906.pem /etc/ssl/certs/51391683.0"
	I0311 13:16:12.400666 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11299062.pem && ln -fs /usr/share/ca-certificates/11299062.pem /etc/ssl/certs/11299062.pem"
	I0311 13:16:12.411992 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11299062.pem
	I0311 13:16:12.416913 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 13:02 /usr/share/ca-certificates/11299062.pem
	I0311 13:16:12.417026 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11299062.pem
	I0311 13:16:12.424144 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11299062.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 13:16:12.433292 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 13:16:12.442773 1186705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:16:12.446406 1186705 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 12:54 /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:16:12.446468 1186705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:16:12.453447 1186705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
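
The hex link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash lookups: TLS libraries locate a CA by hashing its subject and opening <hash>.0 in the certs directory, which is why each cert install is paired with an ln -fs. Reproducible by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
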
	I0311 13:16:12.462620 1186705 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 13:16:12.466174 1186705 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 13:16:12.466221 1186705 kubeadm.go:928] updating node {m04 192.168.49.5 0 v1.28.4  false true} ...
	I0311 13:16:12.466304 1186705 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-992796-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-992796 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 13:16:12.466371 1186705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 13:16:12.474977 1186705 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 13:16:12.475046 1186705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0311 13:16:12.483756 1186705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0311 13:16:12.502524 1186705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 13:16:12.525559 1186705 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0311 13:16:12.529328 1186705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:16:12.540341 1186705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:16:12.677634 1186705 ssh_runner.go:195] Run: sudo systemctl start kubelet
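
In the kubelet drop-in above, the empty ExecStart= line is standard systemd practice: it clears any ExecStart inherited from the base unit before the next line installs the minikube-specific invocation, which is why ExecStart appears twice. To see the merged unit on the node, a sketch:

    systemctl cat kubelet --no-pager   # base unit plus the 10-kubeadm.conf drop-in, overrides last
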
	I0311 13:16:12.695916 1186705 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0311 13:16:12.696484 1186705 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:16:12.699492 1186705 out.go:177] * Verifying Kubernetes components...
	I0311 13:16:12.701872 1186705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:16:12.826216 1186705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:16:12.847266 1186705 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:16:12.847617 1186705 kapi.go:59] client config for ha-992796: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/client.crt", KeyFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/ha-992796/client.key", CAFile:"/home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16fd7a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0311 13:16:12.847705 1186705 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0311 13:16:12.847967 1186705 node_ready.go:35] waiting up to 6m0s for node "ha-992796-m04" to be "Ready" ...
	I0311 13:16:12.848076 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:12.848102 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:12.848127 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:12.848162 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:12.852124 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:13.349037 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:13.349060 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:13.349069 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:13.349074 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:13.352108 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:13.848768 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:13.848794 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:13.848803 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:13.848807 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:13.851890 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:14.348600 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:14.348622 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:14.348632 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:14.348638 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:14.351566 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:14.848964 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:14.848993 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:14.849001 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:14.849006 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:14.852120 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:14.852701 1186705 node_ready.go:53] node "ha-992796-m04" has status "Ready":"Unknown"
	I0311 13:16:15.348332 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:15.348359 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:15.348368 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:15.348372 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:15.351604 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:15.848866 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:15.848891 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:15.848901 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:15.848904 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:15.852021 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:16.348922 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:16.348953 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:16.348963 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:16.348966 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:16.351949 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:16.848383 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:16.848408 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:16.848418 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:16.848423 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:16.860637 1186705 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0311 13:16:16.861718 1186705 node_ready.go:53] node "ha-992796-m04" has status "Ready":"Unknown"
	I0311 13:16:17.348219 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:17.348239 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:17.348257 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:17.348261 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:17.351813 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:17.848226 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:17.848247 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:17.848257 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:17.848261 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:17.851650 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:18.348210 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:18.348228 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:18.348237 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:18.348241 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:18.352001 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:18.848256 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:18.848277 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:18.848286 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:18.848291 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:18.852681 1186705 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 13:16:19.348178 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:19.348203 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:19.348213 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:19.348217 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:19.351914 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:19.352941 1186705 node_ready.go:53] node "ha-992796-m04" has status "Ready":"Unknown"
	I0311 13:16:19.848252 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:19.848274 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:19.848284 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:19.848289 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:19.851361 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:19.852060 1186705 node_ready.go:49] node "ha-992796-m04" has status "Ready":"True"
	I0311 13:16:19.852081 1186705 node_ready.go:38] duration metric: took 7.004074051s for node "ha-992796-m04" to be "Ready" ...
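
The seven-second node wait above is a plain condition poll: GET the node roughly every 500ms and inspect its conditions until Ready flips from "Unknown" to "True". A minimal sketch of that pattern with client-go follows; the package, function name, and interval are illustrative assumptions, not minikube's actual node_ready.go code.

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node's Ready condition until it is True or the
	// timeout expires, mirroring the GET-every-500ms loop in the log above.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					// While the kubelet is unreachable the status reads "Unknown",
					// exactly as the node_ready.go:53 lines above report.
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
	}
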
	I0311 13:16:19.852092 1186705 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 13:16:19.852157 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0311 13:16:19.852169 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:19.852177 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:19.852181 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:19.859607 1186705 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 13:16:19.867462 1186705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:19.867593 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:19.867605 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:19.867615 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:19.867627 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:19.873759 1186705 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 13:16:19.874455 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:19.874473 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:19.874483 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:19.874488 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:19.877228 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:20.368542 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:20.368563 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:20.368573 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:20.368578 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:20.371543 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:20.372172 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:20.372182 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:20.372190 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:20.372194 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:20.374819 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:20.867793 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:20.867812 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:20.867821 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:20.867825 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:20.871005 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:20.871725 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:20.871743 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:20.871752 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:20.871756 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:20.874622 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:21.368490 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:21.368512 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:21.368521 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:21.368527 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:21.371821 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:21.372623 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:21.372644 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:21.372654 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:21.372660 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:21.375583 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:21.867911 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:21.867933 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:21.867944 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:21.867948 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:21.871103 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:21.871920 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:21.871939 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:21.871949 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:21.871954 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:21.874858 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:21.875469 1186705 pod_ready.go:102] pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace has status "Ready":"False"
	I0311 13:16:22.368155 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:22.368178 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:22.368188 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:22.368193 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:22.371240 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:22.372099 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:22.372116 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:22.372127 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:22.372131 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:22.374940 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:22.868420 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:22.868447 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:22.868457 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:22.868462 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:22.871468 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:22.872219 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:22.872236 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:22.872245 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:22.872251 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:22.875159 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:23.367834 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:23.368298 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:23.368315 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:23.368320 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:23.371544 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:23.372203 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:23.372220 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:23.372230 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:23.372234 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:23.375292 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:23.868379 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:23.868399 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:23.868408 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:23.868414 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:23.871420 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:23.872157 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:23.872201 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:23.872216 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:23.872222 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:23.875105 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:23.875640 1186705 pod_ready.go:102] pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace has status "Ready":"False"
	I0311 13:16:24.368388 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:24.368413 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:24.368424 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:24.368428 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:24.371629 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:24.372339 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:24.372357 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:24.372366 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:24.372370 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:24.375153 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:24.868131 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:24.868154 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:24.868166 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:24.868172 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:24.871347 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:24.872122 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:24.872136 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:24.872145 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:24.872151 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:24.874911 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:25.367707 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:25.367730 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:25.367741 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:25.367745 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:25.370903 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:25.371552 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:25.371570 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:25.371580 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:25.371585 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:25.374605 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:25.867704 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:25.867724 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:25.867733 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:25.867736 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:25.870883 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:25.871772 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:25.871796 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:25.871805 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:25.871809 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:25.874787 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:25.875889 1186705 pod_ready.go:102] pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace has status "Ready":"False"
	I0311 13:16:26.368532 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:26.368562 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:26.368572 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:26.368578 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:26.371709 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:26.372413 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:26.372430 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:26.372440 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:26.372445 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:26.375402 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:26.868414 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:26.868437 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:26.868446 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:26.868450 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:26.871527 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:26.872481 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:26.872500 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:26.872509 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:26.872514 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:26.875387 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:27.367731 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:27.367754 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:27.367764 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:27.367769 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:27.371027 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:27.371794 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:27.371811 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:27.371820 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:27.371824 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:27.374656 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:27.868648 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:27.868667 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:27.868676 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:27.868682 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:27.872036 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:27.872943 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:27.872963 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:27.872973 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:27.872976 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:27.876157 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:27.876982 1186705 pod_ready.go:102] pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace has status "Ready":"False"
	I0311 13:16:28.367986 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:28.368006 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:28.368016 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:28.368020 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:28.370749 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:28.371713 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:28.371735 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:28.371744 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:28.371748 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:28.374269 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:28.868124 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:28.868153 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:28.868163 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:28.868168 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:28.871403 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:28.872228 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:28.872248 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:28.872258 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:28.872262 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:28.875449 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:29.368011 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:29.368038 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:29.368049 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:29.368053 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:29.371955 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:29.372806 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:29.372822 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:29.372832 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:29.372837 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:29.375741 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:29.867955 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:29.867978 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:29.867988 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:29.867999 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:29.871342 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:29.872332 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:29.872354 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:29.872363 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:29.872368 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:29.875178 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:30.368205 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:30.368233 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:30.368244 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:30.368255 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:30.371889 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:30.372661 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:30.372719 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:30.372742 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:30.372761 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:30.375937 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:30.376527 1186705 pod_ready.go:102] pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace has status "Ready":"False"
	I0311 13:16:30.867706 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:30.867776 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:30.867802 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:30.867822 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:30.871179 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:30.871921 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:30.871938 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:30.871947 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:30.871952 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:30.874868 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:31.367969 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:31.367990 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:31.368000 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:31.368004 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:31.371038 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:31.372079 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:31.372130 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:31.372145 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:31.372150 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:31.376561 1186705 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 13:16:31.867819 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:31.867845 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:31.867856 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:31.867861 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:31.870915 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:31.872054 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:31.872072 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:31.872082 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:31.872086 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:31.875493 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:32.367752 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:32.367778 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.367792 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.367796 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.370910 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:32.371731 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:32.371751 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.371761 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.371766 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.374704 1186705 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 13:16:32.868027 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qpt7
	I0311 13:16:32.868091 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.868116 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.868137 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.880284 1186705 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0311 13:16:32.883802 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:32.883864 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.883890 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.883911 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.887971 1186705 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 13:16:32.890215 1186705 pod_ready.go:97] node "ha-992796" hosting pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:32.890282 1186705 pod_ready.go:81] duration metric: took 13.022779939s for pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace to be "Ready" ...
	E0311 13:16:32.890306 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796" hosting pod "coredns-5dd5756b68-2qpt7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:32.890327 1186705 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mqfn8" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:32.890423 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mqfn8
	I0311 13:16:32.890449 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.890471 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.890492 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.895946 1186705 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 13:16:32.896954 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:32.897002 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.897025 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.897065 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.905297 1186705 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0311 13:16:32.906312 1186705 pod_ready.go:97] node "ha-992796" hosting pod "coredns-5dd5756b68-mqfn8" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:32.906376 1186705 pod_ready.go:81] duration metric: took 16.014095ms for pod "coredns-5dd5756b68-mqfn8" in "kube-system" namespace to be "Ready" ...
	E0311 13:16:32.906400 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796" hosting pod "coredns-5dd5756b68-mqfn8" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:32.906422 1186705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:32.906509 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-992796
	I0311 13:16:32.906537 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.906560 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.906579 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.912375 1186705 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 13:16:32.912976 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:32.913020 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.913045 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.913067 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.921020 1186705 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 13:16:32.922264 1186705 pod_ready.go:97] node "ha-992796" hosting pod "etcd-ha-992796" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:32.922325 1186705 pod_ready.go:81] duration metric: took 15.877491ms for pod "etcd-ha-992796" in "kube-system" namespace to be "Ready" ...
	E0311 13:16:32.922349 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796" hosting pod "etcd-ha-992796" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:32.922372 1186705 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:32.922476 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-992796-m02
	I0311 13:16:32.922512 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.922534 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.922555 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.931871 1186705 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0311 13:16:32.933086 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:32.933141 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.933163 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.933185 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.939347 1186705 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 13:16:32.940256 1186705 pod_ready.go:92] pod "etcd-ha-992796-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 13:16:32.940310 1186705 pod_ready.go:81] duration metric: took 17.896359ms for pod "etcd-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:32.940346 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:32.940436 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796
	I0311 13:16:32.940460 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.940485 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.940505 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.943966 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:32.944765 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:32.944812 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:32.944834 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:32.944856 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:32.953733 1186705 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0311 13:16:32.954433 1186705 pod_ready.go:97] node "ha-992796" hosting pod "kube-apiserver-ha-992796" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:32.954487 1186705 pod_ready.go:81] duration metric: took 14.107018ms for pod "kube-apiserver-ha-992796" in "kube-system" namespace to be "Ready" ...
	E0311 13:16:32.954512 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796" hosting pod "kube-apiserver-ha-992796" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:32.954534 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:33.068917 1186705 request.go:629] Waited for 114.261085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796-m02
	I0311 13:16:33.069006 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796-m02
	I0311 13:16:33.069019 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:33.069026 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:33.069030 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:33.072584 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:33.268574 1186705 request.go:629] Waited for 195.303145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:33.268649 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:33.268656 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:33.268667 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:33.268672 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:33.272022 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:33.272669 1186705 pod_ready.go:92] pod "kube-apiserver-ha-992796-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 13:16:33.272690 1186705 pod_ready.go:81] duration metric: took 318.118172ms for pod "kube-apiserver-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
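
The request.go:629 "Waited ... due to client-side throttling, not priority and fairness" lines here are produced by client-go's token-bucket rate limiter, not by the API server: the rest.Config dumped at the top of this log has QPS:0, Burst:0, so client-go falls back to its defaults of QPS 5 and Burst 10, and bursts of polling GETs get spaced out on the client. A sketch of loosening that limit when building a clientset; the kubeconfig path and the chosen numbers are illustrative assumptions.

	package throttle

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFastClient builds a clientset with a looser client-side rate limit
	// than client-go's defaults of QPS 5 / Burst 10.
	func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // sustained requests per second before throttling
		cfg.Burst = 100 // short-term burst allowance above QPS
		return kubernetes.NewForConfig(cfg)
	}
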
	I0311 13:16:33.272703 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:33.468841 1186705 request.go:629] Waited for 196.069853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796
	I0311 13:16:33.468910 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796
	I0311 13:16:33.468919 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:33.468928 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:33.468932 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:33.472166 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:33.668088 1186705 request.go:629] Waited for 195.275535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:33.668150 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:33.668162 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:33.668171 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:33.668181 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:33.671418 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:33.672037 1186705 pod_ready.go:97] node "ha-992796" hosting pod "kube-controller-manager-ha-992796" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:33.672061 1186705 pod_ready.go:81] duration metric: took 399.350094ms for pod "kube-controller-manager-ha-992796" in "kube-system" namespace to be "Ready" ...
	E0311 13:16:33.672072 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796" hosting pod "kube-controller-manager-ha-992796" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:33.672080 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:33.868871 1186705 request.go:629] Waited for 196.70945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796-m02
	I0311 13:16:33.868977 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-992796-m02
	I0311 13:16:33.868989 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:33.868998 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:33.869024 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:33.872433 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:34.068459 1186705 request.go:629] Waited for 195.348486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:34.068533 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:34.068539 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:34.068549 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:34.068562 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:34.072045 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:34.072584 1186705 pod_ready.go:92] pod "kube-controller-manager-ha-992796-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 13:16:34.072605 1186705 pod_ready.go:81] duration metric: took 400.513911ms for pod "kube-controller-manager-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:34.072618 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2p8p9" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:34.269024 1186705 request.go:629] Waited for 196.324525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2p8p9
	I0311 13:16:34.269105 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2p8p9
	I0311 13:16:34.269115 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:34.269125 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:34.269129 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:34.272170 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:34.468103 1186705 request.go:629] Waited for 195.254457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:34.468160 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:34.468166 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:34.468175 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:34.468182 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:34.471406 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:34.472040 1186705 pod_ready.go:97] node "ha-992796" hosting pod "kube-proxy-2p8p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:34.472063 1186705 pod_ready.go:81] duration metric: took 399.434079ms for pod "kube-proxy-2p8p9" in "kube-system" namespace to be "Ready" ...
	E0311 13:16:34.472074 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796" hosting pod "kube-proxy-2p8p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:34.472088 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5rxbt" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:34.668458 1186705 request.go:629] Waited for 196.297162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5rxbt
	I0311 13:16:34.668520 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5rxbt
	I0311 13:16:34.668533 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:34.668547 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:34.668555 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:34.671657 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:34.868574 1186705 request.go:629] Waited for 196.221578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:34.868640 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m04
	I0311 13:16:34.868657 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:34.868666 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:34.868686 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:34.871734 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:34.872708 1186705 pod_ready.go:92] pod "kube-proxy-5rxbt" in "kube-system" namespace has status "Ready":"True"
	I0311 13:16:34.872759 1186705 pod_ready.go:81] duration metric: took 400.662559ms for pod "kube-proxy-5rxbt" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:34.872787 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6dzwv" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:35.068022 1186705 request.go:629] Waited for 195.130924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dzwv
	I0311 13:16:35.068105 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dzwv
	I0311 13:16:35.068115 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:35.068125 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:35.068131 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:35.071383 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:35.268370 1186705 request.go:629] Waited for 196.351191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:35.268444 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:35.268451 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:35.268459 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:35.268463 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:35.271818 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:35.272296 1186705 pod_ready.go:92] pod "kube-proxy-6dzwv" in "kube-system" namespace has status "Ready":"True"
	I0311 13:16:35.272315 1186705 pod_ready.go:81] duration metric: took 399.506791ms for pod "kube-proxy-6dzwv" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:35.272326 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-992796" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:35.468827 1186705 request.go:629] Waited for 196.409183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796
	I0311 13:16:35.468889 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796
	I0311 13:16:35.468895 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:35.468903 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:35.468912 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:35.472082 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:35.668956 1186705 request.go:629] Waited for 196.34384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:35.669034 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796
	I0311 13:16:35.669046 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:35.669055 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:35.669059 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:35.672275 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:35.672953 1186705 pod_ready.go:97] node "ha-992796" hosting pod "kube-scheduler-ha-992796" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:35.672977 1186705 pod_ready.go:81] duration metric: took 400.642991ms for pod "kube-scheduler-ha-992796" in "kube-system" namespace to be "Ready" ...
	E0311 13:16:35.672988 1186705 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-992796" hosting pod "kube-scheduler-ha-992796" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-992796" has status "Ready":"Unknown"
	I0311 13:16:35.672995 1186705 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:35.868439 1186705 request.go:629] Waited for 195.376505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796-m02
	I0311 13:16:35.868537 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-992796-m02
	I0311 13:16:35.868567 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:35.868583 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:35.868588 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:35.871802 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:36.068753 1186705 request.go:629] Waited for 196.347204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:36.068899 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-992796-m02
	I0311 13:16:36.068937 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:36.068964 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:36.068985 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:36.072698 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:36.073893 1186705 pod_ready.go:92] pod "kube-scheduler-ha-992796-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 13:16:36.073929 1186705 pod_ready.go:81] duration metric: took 400.921884ms for pod "kube-scheduler-ha-992796-m02" in "kube-system" namespace to be "Ready" ...
	I0311 13:16:36.073943 1186705 pod_ready.go:38] duration metric: took 16.221841179s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 13:16:36.073957 1186705 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 13:16:36.074026 1186705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:16:36.089937 1186705 system_svc.go:56] duration metric: took 15.970313ms WaitForService to wait for kubelet
	I0311 13:16:36.089965 1186705 kubeadm.go:576] duration metric: took 23.393750731s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:16:36.089987 1186705 node_conditions.go:102] verifying NodePressure condition ...
	I0311 13:16:36.268415 1186705 request.go:629] Waited for 178.327566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0311 13:16:36.268474 1186705 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0311 13:16:36.268484 1186705 round_trippers.go:469] Request Headers:
	I0311 13:16:36.268501 1186705 round_trippers.go:473]     Accept: application/json, */*
	I0311 13:16:36.268507 1186705 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0311 13:16:36.272038 1186705 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 13:16:36.273554 1186705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 13:16:36.273582 1186705 node_conditions.go:123] node cpu capacity is 2
	I0311 13:16:36.273593 1186705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 13:16:36.273598 1186705 node_conditions.go:123] node cpu capacity is 2
	I0311 13:16:36.273602 1186705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 13:16:36.273606 1186705 node_conditions.go:123] node cpu capacity is 2
	I0311 13:16:36.273611 1186705 node_conditions.go:105] duration metric: took 183.601084ms to run NodePressure ...
	I0311 13:16:36.273623 1186705 start.go:240] waiting for startup goroutines ...
	I0311 13:16:36.273661 1186705 start.go:254] writing updated cluster config ...
	I0311 13:16:36.274065 1186705 ssh_runner.go:195] Run: rm -f paused
	I0311 13:16:36.335857 1186705 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 13:16:36.339613 1186705 out.go:177] * Done! kubectl is now configured to use "ha-992796" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.778801941Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.778836467Z" level=info msg="Updated default CNI network name to kindnet"
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.778854526Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.782431072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.782471432Z" level=info msg="Updated default CNI network name to kindnet"
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.957121541Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ab974c27-2d2a-4e0d-9383-1eea74511dd2 name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.957330864Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ab974c27-2d2a-4e0d-9383-1eea74511dd2 name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.958048286Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8c1f128f-b448-44ac-b4b1-f3228328e68e name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.958431627Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8c1f128f-b448-44ac-b4b1-f3228328e68e name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.959232451Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=01a84e84-63df-4e47-98ac-c90096ee68cb name=/runtime.v1.RuntimeService/CreateContainer
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.959406133Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.973835080Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/69ccff250c7e9ffb3613b0450dcada1203531f7dd505bcb0146d1e976cf92695/merged/etc/passwd: no such file or directory"
	Mar 11 13:15:59 ha-992796 crio[637]: time="2024-03-11 13:15:59.973880658Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/69ccff250c7e9ffb3613b0450dcada1203531f7dd505bcb0146d1e976cf92695/merged/etc/group: no such file or directory"
	Mar 11 13:16:00 ha-992796 crio[637]: time="2024-03-11 13:16:00.115699496Z" level=info msg="Created container 391c4a72d69b992b3d5dcdc4bfba8e8efd1b7888bef6a6491435a76e393b31e1: kube-system/storage-provisioner/storage-provisioner" id=01a84e84-63df-4e47-98ac-c90096ee68cb name=/runtime.v1.RuntimeService/CreateContainer
	Mar 11 13:16:00 ha-992796 crio[637]: time="2024-03-11 13:16:00.127018321Z" level=info msg="Starting container: 391c4a72d69b992b3d5dcdc4bfba8e8efd1b7888bef6a6491435a76e393b31e1" id=55675a02-75d7-4845-9460-fce65aef9d3f name=/runtime.v1.RuntimeService/StartContainer
	Mar 11 13:16:00 ha-992796 crio[637]: time="2024-03-11 13:16:00.186771733Z" level=info msg="Started container" PID=1870 containerID=391c4a72d69b992b3d5dcdc4bfba8e8efd1b7888bef6a6491435a76e393b31e1 description=kube-system/storage-provisioner/storage-provisioner id=55675a02-75d7-4845-9460-fce65aef9d3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad4bef09fc2cce25c9b9371e32ee757c350a49df010b2ddcda794ae78a707106
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.710446698Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.28.4" id=f5c519fb-d0bf-4521-b939-fd6965487dd0 name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.710714539Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e],Size_:117252916,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=f5c519fb-d0bf-4521-b939-fd6965487dd0 name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.711416880Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.28.4" id=c02e51e3-1f37-433d-bf57-126e71e128ac name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.711626523Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e],Size_:117252916,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=c02e51e3-1f37-433d-bf57-126e71e128ac name=/runtime.v1.ImageService/ImageStatus
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.712783392Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-992796/kube-controller-manager" id=05e7904d-0405-44b2-9342-a82b95218e67 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.712877206Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.803351536Z" level=info msg="Created container 77b7a578bb728ee03b7c3e215623bef962ea0af0d84f4ff03b992c328f877f56: kube-system/kube-controller-manager-ha-992796/kube-controller-manager" id=05e7904d-0405-44b2-9342-a82b95218e67 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.803971195Z" level=info msg="Starting container: 77b7a578bb728ee03b7c3e215623bef962ea0af0d84f4ff03b992c328f877f56" id=45d1633c-61e4-4666-82d0-e0f6c10606cc name=/runtime.v1.RuntimeService/StartContainer
	Mar 11 13:16:16 ha-992796 crio[637]: time="2024-03-11 13:16:16.814808852Z" level=info msg="Started container" PID=1909 containerID=77b7a578bb728ee03b7c3e215623bef962ea0af0d84f4ff03b992c328f877f56 description=kube-system/kube-controller-manager-ha-992796/kube-controller-manager id=45d1633c-61e4-4666-82d0-e0f6c10606cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=98e87961f7037a575c5c20d57c5e0eb2a4a5e9c35b00a059ceb9465152599c1e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	77b7a578bb728       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   21 seconds ago       Running             kube-controller-manager   8                   98e87961f7037       kube-controller-manager-ha-992796
	391c4a72d69b9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   38 seconds ago       Running             storage-provisioner       5                   ad4bef09fc2cc       storage-provisioner
	16a7062ec957e       adf781c1312f06f9d22bfc391f48c68e39ed1bfe4166c6ec09faea1a89f23d46   39 seconds ago       Running             kube-vip                  5                   7eebdcd898269       kube-vip-ha-992796
	0852bbc3f3fdf       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   44 seconds ago       Running             kube-apiserver            4                   1be2c24792065       kube-apiserver-ha-992796
	9c840f51494cc       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   bbac4b4891e7f       busybox-5b5d89c9d6-x8wg7
	fb7248700f6f9       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   About a minute ago   Running             kube-proxy                2                   823a05138771e       kube-proxy-2p8p9
	e155034c259f4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Running             coredns                   2                   ddeef526e9caa       coredns-5dd5756b68-mqfn8
	04a8537e94af1       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Running             coredns                   2                   839c1932cf89d       coredns-5dd5756b68-2qpt7
	92a2cfe26a51d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       4                   ad4bef09fc2cc       storage-provisioner
	e551cef484173       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   About a minute ago   Running             kindnet-cni               2                   471be04edf29a       kindnet-64dzs
	c80f1ba382404       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   About a minute ago   Exited              kube-controller-manager   7                   98e87961f7037       kube-controller-manager-ha-992796
	79e79abf80700       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   About a minute ago   Exited              kube-apiserver            3                   1be2c24792065       kube-apiserver-ha-992796
	3512b32ca1b2f       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   About a minute ago   Running             kube-scheduler            2                   fa990f0735bd9       kube-scheduler-ha-992796
	c93ff78a6b0b5       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   About a minute ago   Running             etcd                      2                   09ccfdac63b1f       etcd-ha-992796
	7d95f61055560       adf781c1312f06f9d22bfc391f48c68e39ed1bfe4166c6ec09faea1a89f23d46   About a minute ago   Exited              kube-vip                  4                   7eebdcd898269       kube-vip-ha-992796
	
	
	==> coredns [04a8537e94af158f6b7e7fac1f4b34e4c9659d28e50708c05a5097b06a4e9c7b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50306 - 43089 "HINFO IN 7952656483151928217.902090019458199491. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023333608s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e155034c259f41d2d171decf4bfc01e66b4c6d4812965d1984a277b2eac8eafd] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44482 - 48500 "HINFO IN 5039219207336065239.7846641029935320923. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020035429s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-992796
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-992796
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=ha-992796
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T13_06_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 13:06:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-992796
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 13:15:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 11 Mar 2024 13:15:27 +0000   Mon, 11 Mar 2024 13:16:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 11 Mar 2024 13:15:27 +0000   Mon, 11 Mar 2024 13:16:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 11 Mar 2024 13:15:27 +0000   Mon, 11 Mar 2024 13:16:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 11 Mar 2024 13:15:27 +0000   Mon, 11 Mar 2024 13:16:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-992796
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 910ea16139634899ba7844b8976bfd67
	  System UUID:                503618c5-c06d-415f-ab9b-afdff507d327
	  Boot ID:                    ac1cf86e-1c30-4f1a-912c-77e6f73db4d1
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-x8wg7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 coredns-5dd5756b68-2qpt7             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-5dd5756b68-mqfn8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-992796                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-64dzs                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-992796             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-992796    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-2p8p9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-992796             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-992796                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 69s                    kube-proxy       
	  Normal  Starting                 4m54s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node ha-992796 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node ha-992796 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node ha-992796 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node ha-992796 event: Registered Node ha-992796 in Controller
	  Normal  NodeReady                10m                    kubelet          Node ha-992796 status is now: NodeReady
	  Normal  RegisteredNode           9m22s                  node-controller  Node ha-992796 event: Registered Node ha-992796 in Controller
	  Normal  RegisteredNode           8m39s                  node-controller  Node ha-992796 event: Registered Node ha-992796 in Controller
	  Normal  NodeHasSufficientPID     5m45s (x8 over 5m45s)  kubelet          Node ha-992796 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m45s (x8 over 5m45s)  kubelet          Node ha-992796 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m45s (x8 over 5m45s)  kubelet          Node ha-992796 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-992796 event: Registered Node ha-992796 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-992796 event: Registered Node ha-992796 in Controller
	  Normal  RegisteredNode           3m31s                  node-controller  Node ha-992796 event: Registered Node ha-992796 in Controller
	  Normal  Starting                 118s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)    kubelet          Node ha-992796 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)    kubelet          Node ha-992796 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)    kubelet          Node ha-992796 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           66s                    node-controller  Node ha-992796 event: Registered Node ha-992796 in Controller
	  Normal  RegisteredNode           9s                     node-controller  Node ha-992796 event: Registered Node ha-992796 in Controller
	  Normal  NodeNotReady             6s                     node-controller  Node ha-992796 status is now: NodeNotReady
	
	
	Name:               ha-992796-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-992796-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=ha-992796
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T13_07_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 13:06:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-992796-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 13:16:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 13:15:20 +0000   Mon, 11 Mar 2024 13:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 13:15:20 +0000   Mon, 11 Mar 2024 13:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 13:15:20 +0000   Mon, 11 Mar 2024 13:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 13:15:20 +0000   Mon, 11 Mar 2024 13:07:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-992796-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 b07cbcd43c414dff9c3634aac1df0255
	  System UUID:                a264857c-3e2c-4b1d-9ca0-ffed68872cf7
	  Boot ID:                    ac1cf86e-1c30-4f1a-912c-77e6f73db4d1
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-slj6k                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 etcd-ha-992796-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m50s
	  kube-system                 kindnet-pgt46                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m51s
	  kube-system                 kube-apiserver-ha-992796-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-controller-manager-ha-992796-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-proxy-6dzwv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-scheduler-ha-992796-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 kube-vip-ha-992796-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 29s                    kube-proxy       
	  Normal  Starting                 5m1s                   kube-proxy       
	  Normal  Starting                 6m36s                  kube-proxy       
	  Normal  Starting                 9m32s                  kube-proxy       
	  Normal  RegisteredNode           9m22s                  node-controller  Node ha-992796-m02 event: Registered Node ha-992796-m02 in Controller
	  Normal  RegisteredNode           8m39s                  node-controller  Node ha-992796-m02 event: Registered Node ha-992796-m02 in Controller
	  Normal  NodeHasNoDiskPressure    7m5s (x8 over 7m5s)    kubelet          Node ha-992796-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m5s (x8 over 7m5s)    kubelet          Node ha-992796-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m5s (x8 over 7m5s)    kubelet          Node ha-992796-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s (x8 over 5m42s)  kubelet          Node ha-992796-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m42s (x8 over 5m42s)  kubelet          Node ha-992796-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m42s (x8 over 5m42s)  kubelet          Node ha-992796-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-992796-m02 event: Registered Node ha-992796-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-992796-m02 event: Registered Node ha-992796-m02 in Controller
	  Normal  RegisteredNode           3m31s                  node-controller  Node ha-992796-m02 event: Registered Node ha-992796-m02 in Controller
	  Normal  Starting                 116s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node ha-992796-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node ha-992796-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)    kubelet          Node ha-992796-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           66s                    node-controller  Node ha-992796-m02 event: Registered Node ha-992796-m02 in Controller
	  Normal  RegisteredNode           9s                     node-controller  Node ha-992796-m02 event: Registered Node ha-992796-m02 in Controller
	
	
	Name:               ha-992796-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-992796-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f02234404c3608d31811fa9c1f2f7d976b3e563
	                    minikube.k8s.io/name=ha-992796
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T13_08_48_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 13:08:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-992796-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 13:16:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 13:16:19 +0000   Mon, 11 Mar 2024 13:16:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 13:16:19 +0000   Mon, 11 Mar 2024 13:16:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 13:16:19 +0000   Mon, 11 Mar 2024 13:16:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 13:16:19 +0000   Mon, 11 Mar 2024 13:16:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-992796-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 b18b1ced5e154dccb063c39f55641dff
	  System UUID:                84a66343-af7a-486b-a043-11aad6ff8905
	  Boot ID:                    ac1cf86e-1c30-4f1a-912c-77e6f73db4d1
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4r8xk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kindnet-rxqfx               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m53s
	  kube-system                 kube-proxy-5rxbt            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m50s                  kube-proxy       
	  Normal  Starting                 10s                    kube-proxy       
	  Normal  Starting                 2m54s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m53s                  kubelet          Node ha-992796-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m53s                  kubelet          Node ha-992796-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m53s                  kubelet          Node ha-992796-m04 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m53s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m53s                  node-controller  Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller
	  Normal  RegisteredNode           7m50s                  node-controller  Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller
	  Normal  NodeReady                7m49s                  kubelet          Node ha-992796-m04 status is now: NodeReady
	  Normal  RegisteredNode           7m49s                  node-controller  Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller
	  Normal  NodeNotReady             4m19s                  node-controller  Node ha-992796-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller
	  Normal  RegisteredNode           3m32s                  node-controller  Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller
	  Normal  Starting                 3m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m12s (x8 over 3m24s)  kubelet          Node ha-992796-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x8 over 3m24s)  kubelet          Node ha-992796-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x8 over 3m24s)  kubelet          Node ha-992796-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           67s                    node-controller  Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller
	  Normal  Starting                 33s                    kubelet          Starting kubelet.
	  Normal  NodeNotReady             27s                    node-controller  Node ha-992796-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  20s (x8 over 33s)      kubelet          Node ha-992796-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 33s)      kubelet          Node ha-992796-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x8 over 33s)      kubelet          Node ha-992796-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10s                    node-controller  Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller
	
	
	==> dmesg <==
	[  +0.001096] FS-Cache: O-key=[8] '1373ed0000000000'
	[  +0.000813] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=00000000b3298906{9p.inode} n=0000000019e50f85
	[  +0.001038] FS-Cache: N-key=[8] '1373ed0000000000'
	[  +0.002987] FS-Cache: Duplicate cookie detected
	[  +0.000705] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000969] FS-Cache: O-cookie d=00000000b3298906{9p.inode} n=00000000d1ccedc3
	[  +0.001199] FS-Cache: O-key=[8] '1373ed0000000000'
	[  +0.000709] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=00000000b3298906{9p.inode} n=00000000109b759e
	[  +0.001182] FS-Cache: N-key=[8] '1373ed0000000000'
	[  +1.933763] FS-Cache: Duplicate cookie detected
	[  +0.000845] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=00000000b3298906{9p.inode} n=00000000a0b21b75
	[  +0.001269] FS-Cache: O-key=[8] '1273ed0000000000'
	[  +0.000837] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001048] FS-Cache: N-cookie d=00000000b3298906{9p.inode} n=0000000019e50f85
	[  +0.001144] FS-Cache: N-key=[8] '1273ed0000000000'
	[  +0.354055] FS-Cache: Duplicate cookie detected
	[  +0.000804] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=00000000b3298906{9p.inode} n=0000000009a88b8f
	[  +0.001119] FS-Cache: O-key=[8] '1873ed0000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000973] FS-Cache: N-cookie d=00000000b3298906{9p.inode} n=000000005cfb581e
	[  +0.001089] FS-Cache: N-key=[8] '1873ed0000000000'
	
	
	==> etcd [c93ff78a6b0b575d1cc067fc137c1ebaf784aef804401d94c68f0a2b32492ab3] <==
	{"level":"warn","ts":"2024-03-11T13:15:16.737252Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.63248Z","time spent":"11.104758369s","remote":"127.0.0.1:37344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737285Z","caller":"traceutil/trace.go:171","msg":"trace[148044398] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:2420; }","duration":"4.140178285s","start":"2024-03-11T13:15:12.597101Z","end":"2024-03-11T13:15:16.737279Z","steps":["trace[148044398] 'agreement among raft nodes before linearized reading'  (duration: 4.126015719s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.737308Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:12.597085Z","time spent":"4.140216816s","remote":"127.0.0.1:36928","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.73707Z","caller":"traceutil/trace.go:171","msg":"trace[1926267596] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; }","duration":"11.104165383s","start":"2024-03-11T13:15:05.632893Z","end":"2024-03-11T13:15:16.737058Z","steps":["trace[1926267596] 'agreement among raft nodes before linearized reading'  (duration: 11.080195484s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.73881Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632889Z","time spent":"11.105906974s","remote":"127.0.0.1:37174","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":0,"response size":0,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737093Z","caller":"traceutil/trace.go:171","msg":"trace[169563349] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; }","duration":"11.104227946s","start":"2024-03-11T13:15:05.632861Z","end":"2024-03-11T13:15:16.737089Z","steps":["trace[169563349] 'agreement among raft nodes before linearized reading'  (duration: 11.080239061s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.738919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632858Z","time spent":"11.106054261s","remote":"127.0.0.1:37156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737105Z","caller":"traceutil/trace.go:171","msg":"trace[2025990670] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; }","duration":"11.104263268s","start":"2024-03-11T13:15:05.632838Z","end":"2024-03-11T13:15:16.737101Z","steps":["trace[2025990670] 'agreement among raft nodes before linearized reading'  (duration: 11.080274112s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.739148Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632835Z","time spent":"11.106294025s","remote":"127.0.0.1:37166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737116Z","caller":"traceutil/trace.go:171","msg":"trace[1302388947] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; }","duration":"11.104298368s","start":"2024-03-11T13:15:05.632814Z","end":"2024-03-11T13:15:16.737112Z","steps":["trace[1302388947] 'agreement among raft nodes before linearized reading'  (duration: 11.08030968s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.739379Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632811Z","time spent":"11.106558929s","remote":"127.0.0.1:37062","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737128Z","caller":"traceutil/trace.go:171","msg":"trace[689998717] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"11.104342232s","start":"2024-03-11T13:15:05.632781Z","end":"2024-03-11T13:15:16.737124Z","steps":["trace[689998717] 'agreement among raft nodes before linearized reading'  (duration: 11.080357089s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.7395Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632779Z","time spent":"11.106706182s","remote":"127.0.0.1:37060","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737138Z","caller":"traceutil/trace.go:171","msg":"trace[1392167936] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"11.10437716s","start":"2024-03-11T13:15:05.632758Z","end":"2024-03-11T13:15:16.737135Z","steps":["trace[1392167936] 'agreement among raft nodes before linearized reading'  (duration: 11.080392747s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.739662Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632755Z","time spent":"11.106898112s","remote":"127.0.0.1:37072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.73715Z","caller":"traceutil/trace.go:171","msg":"trace[1007208736] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"11.104418834s","start":"2024-03-11T13:15:05.632727Z","end":"2024-03-11T13:15:16.737146Z","steps":["trace[1007208736] 'agreement among raft nodes before linearized reading'  (duration: 11.080438104s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.739796Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632709Z","time spent":"11.107079162s","remote":"127.0.0.1:37058","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737161Z","caller":"traceutil/trace.go:171","msg":"trace[453318678] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"11.104467185s","start":"2024-03-11T13:15:05.632691Z","end":"2024-03-11T13:15:16.737158Z","steps":["trace[453318678] 'agreement among raft nodes before linearized reading'  (duration: 11.08048703s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.739929Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632687Z","time spent":"11.10723384s","remote":"127.0.0.1:37356","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737174Z","caller":"traceutil/trace.go:171","msg":"trace[190494360] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"11.104858592s","start":"2024-03-11T13:15:05.632311Z","end":"2024-03-11T13:15:16.73717Z","steps":["trace[190494360] 'agreement among raft nodes before linearized reading'  (duration: 11.076091165s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.740055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632308Z","time spent":"11.107739592s","remote":"127.0.0.1:37018","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737184Z","caller":"traceutil/trace.go:171","msg":"trace[1171180334] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; }","duration":"11.104619706s","start":"2024-03-11T13:15:05.632561Z","end":"2024-03-11T13:15:16.737181Z","steps":["trace[1171180334] 'agreement among raft nodes before linearized reading'  (duration: 11.080629926s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.740174Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632559Z","time spent":"11.107608477s","remote":"127.0.0.1:37326","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" limit:10000 "}
	{"level":"info","ts":"2024-03-11T13:15:16.737199Z","caller":"traceutil/trace.go:171","msg":"trace[519694455] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; }","duration":"11.104644272s","start":"2024-03-11T13:15:05.632548Z","end":"2024-03-11T13:15:16.737192Z","steps":["trace[519694455] 'agreement among raft nodes before linearized reading'  (duration: 11.08065583s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T13:15:16.740293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T13:15:05.632535Z","time spent":"11.107751407s","remote":"127.0.0.1:37332","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" limit:10000 "}
	
	
	==> kernel <==
	 13:16:39 up  4:59,  0 users,  load average: 1.78, 2.53, 2.62
	Linux ha-992796 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [e551cef4841736c7178396f943514f451b7f8bb321b8dc099aabb43328fc283c] <==
	I0311 13:15:59.764728       1 main.go:227] handling current node
	I0311 13:15:59.768231       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0311 13:15:59.768265       1 main.go:250] Node ha-992796-m02 has CIDR [10.244.1.0/24] 
	I0311 13:15:59.768413       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0311 13:15:59.768486       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0311 13:15:59.768500       1 main.go:250] Node ha-992796-m04 has CIDR [10.244.3.0/24] 
	I0311 13:15:59.768551       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0311 13:16:09.783455       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:16:09.783572       1 main.go:227] handling current node
	I0311 13:16:09.783606       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0311 13:16:09.783649       1 main.go:250] Node ha-992796-m02 has CIDR [10.244.1.0/24] 
	I0311 13:16:09.783779       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0311 13:16:09.783823       1 main.go:250] Node ha-992796-m04 has CIDR [10.244.3.0/24] 
	I0311 13:16:19.804209       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:16:19.804241       1 main.go:227] handling current node
	I0311 13:16:19.804260       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0311 13:16:19.804266       1 main.go:250] Node ha-992796-m02 has CIDR [10.244.1.0/24] 
	I0311 13:16:19.804384       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0311 13:16:19.804396       1 main.go:250] Node ha-992796-m04 has CIDR [10.244.3.0/24] 
	I0311 13:16:29.810644       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 13:16:29.810672       1 main.go:227] handling current node
	I0311 13:16:29.810683       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0311 13:16:29.810693       1 main.go:250] Node ha-992796-m02 has CIDR [10.244.1.0/24] 
	I0311 13:16:29.810787       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0311 13:16:29.810799       1 main.go:250] Node ha-992796-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0852bbc3f3fdf258b72fcc4b7b5617fdbaf789b55b64205e570b8ffe5d785b21] <==
	I0311 13:15:57.384423       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 13:15:57.893084       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 13:15:57.899578       1 aggregator.go:166] initial CRD sync complete...
	I0311 13:15:57.900566       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 13:15:57.900602       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 13:15:57.968802       1 controller.go:134] Starting OpenAPI controller
	I0311 13:15:57.968877       1 controller.go:85] Starting OpenAPI V3 controller
	I0311 13:15:57.968903       1 naming_controller.go:291] Starting NamingConditionController
	I0311 13:15:57.968923       1 establishing_controller.go:76] Starting EstablishingController
	I0311 13:15:57.968947       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0311 13:15:57.968966       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0311 13:15:57.968994       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0311 13:15:57.969299       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0311 13:15:57.990532       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0311 13:15:57.990621       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0311 13:15:57.990964       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 13:15:57.991048       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 13:15:58.010810       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 13:15:58.064029       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 13:15:58.066410       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 13:15:58.101231       1 cache.go:39] Caches are synced for autoregister controller
	I0311 13:15:58.371452       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0311 13:15:58.816134       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0311 13:15:58.817782       1 controller.go:624] quota admission added evaluator for: endpoints
	I0311 13:15:58.825604       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [79e79abf80700916f87e93f0d684812da355d799ef8059d2d2fb0e80e7f85fe9] <==
	I0311 13:15:19.539547       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 13:15:19.539673       1 aggregator.go:166] initial CRD sync complete...
	I0311 13:15:19.539712       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 13:15:19.539747       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 13:15:19.543895       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 13:15:19.547480       1 trace.go:236] Trace[925350098]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a6967808-dc46-4f78-bfae-c7e12108500b,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-4nmdc4o25fs7g2ba2k26iqxh5i,user-agent:kube-apiserver/v1.28.4 (linux/arm64) kubernetes/bae2c62,verb:PUT (11-Mar-2024 13:15:17.126) (total time: 2420ms):
	Trace[925350098]: ["GuaranteedUpdate etcd3" audit-id:a6967808-dc46-4f78-bfae-c7e12108500b,key:/leases/kube-system/apiserver-4nmdc4o25fs7g2ba2k26iqxh5i,type:*coordination.Lease,resource:leases.coordination.k8s.io 2420ms (13:15:17.127)
	Trace[925350098]:  ---"About to Encode" 2415ms (13:15:19.544)]
	Trace[925350098]: [2.420526132s] [2.420526132s] END
	I0311 13:15:19.574014       1 trace.go:236] Trace[588696402]: "Update" accept:application/json, */*,audit-id:35ba17b9-7347-452d-b668-15cc8982cbbb,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/arm64) kubernetes/$Format,verb:PUT (11-Mar-2024 13:15:16.744) (total time: 2829ms):
	Trace[588696402]: ["GuaranteedUpdate etcd3" audit-id:35ba17b9-7347-452d-b668-15cc8982cbbb,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 2828ms (13:15:16.745)
	Trace[588696402]:  ---"About to Encode" 2816ms (13:15:19.570)]
	Trace[588696402]: [2.829325979s] [2.829325979s] END
	I0311 13:15:19.737877       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 13:15:19.743297       1 cache.go:39] Caches are synced for autoregister controller
	I0311 13:15:19.845771       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 13:15:19.854790       1 cache.go:39] Caches are synced for AvailableConditionController controller
	W0311 13:15:19.863389       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0311 13:15:19.864877       1 controller.go:624] quota admission added evaluator for: endpoints
	I0311 13:15:19.873656       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0311 13:15:19.878960       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0311 13:15:19.889806       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0311 13:15:19.939146       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0311 13:15:19.939174       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	F0311 13:15:53.038011       1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [77b7a578bb728ee03b7c3e215623bef962ea0af0d84f4ff03b992c328f877f56] <==
	I0311 13:16:29.718166       1 shared_informer.go:318] Caches are synced for cronjob
	I0311 13:16:29.751122       1 shared_informer.go:318] Caches are synced for TTL
	I0311 13:16:29.754443       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0311 13:16:29.760640       1 shared_informer.go:318] Caches are synced for attach detach
	I0311 13:16:29.775535       1 shared_informer.go:318] Caches are synced for persistent volume
	I0311 13:16:29.798089       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 13:16:29.828243       1 shared_informer.go:318] Caches are synced for taint
	I0311 13:16:29.828349       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0311 13:16:29.828403       1 taint_manager.go:210] "Sending events to api server"
	I0311 13:16:29.828975       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0311 13:16:29.829313       1 event.go:307] "Event occurred" object="ha-992796" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-992796 event: Registered Node ha-992796 in Controller"
	I0311 13:16:29.829460       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-992796"
	I0311 13:16:29.829511       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-992796-m02"
	I0311 13:16:29.829545       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-992796-m04"
	I0311 13:16:29.829578       1 event.go:307] "Event occurred" object="ha-992796-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-992796-m02 event: Registered Node ha-992796-m02 in Controller"
	I0311 13:16:29.829843       1 event.go:307] "Event occurred" object="ha-992796-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-992796-m04 event: Registered Node ha-992796-m04 in Controller"
	I0311 13:16:29.829821       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0311 13:16:29.857529       1 shared_informer.go:318] Caches are synced for daemon sets
	I0311 13:16:29.885888       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 13:16:30.211470       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 13:16:30.211506       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0311 13:16:30.234621       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 13:16:32.709462       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-992796-m04"
	I0311 13:16:32.844578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.711655ms"
	I0311 13:16:32.844755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="122.95µs"
	
	
	==> kube-controller-manager [c80f1ba38240495d99d74354bbf2465e68cc0f0bf7c0984a915b74f0cdc9a9be] <==
	I0311 13:15:30.419826       1 serving.go:348] Generated self-signed cert in-memory
	I0311 13:15:31.715150       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0311 13:15:31.715183       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 13:15:31.716397       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 13:15:31.716578       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0311 13:15:31.717062       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0311 13:15:31.717134       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0311 13:15:41.738949       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [fb7248700f6f9d9f0c2f79f60add6ac0b1151704ae6328f9238a6945c8e980c4] <==
	I0311 13:15:29.682656       1 server_others.go:69] "Using iptables proxy"
	I0311 13:15:29.706630       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0311 13:15:29.831857       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0311 13:15:29.833894       1 server_others.go:152] "Using iptables Proxier"
	I0311 13:15:29.836459       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0311 13:15:29.843447       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0311 13:15:29.843602       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 13:15:29.843885       1 server.go:846] "Version info" version="v1.28.4"
	I0311 13:15:29.844127       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 13:15:29.844809       1 config.go:188] "Starting service config controller"
	I0311 13:15:29.844871       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 13:15:29.844922       1 config.go:97] "Starting endpoint slice config controller"
	I0311 13:15:29.844951       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 13:15:29.846608       1 config.go:315] "Starting node config controller"
	I0311 13:15:29.846663       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 13:15:29.949437       1 shared_informer.go:318] Caches are synced for node config
	I0311 13:15:29.958044       1 shared_informer.go:318] Caches are synced for service config
	I0311 13:15:29.961501       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3512b32ca1b2f523ec5dc7cfe7b2fd3d8a395771a5e3352a3eb0700dc127b014] <==
	E0311 13:15:13.118286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 13:15:14.038412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 13:15:14.038445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 13:15:14.358050       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 13:15:14.358084       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 13:15:15.417465       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 13:15:15.417498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 13:15:16.013267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 13:15:16.013305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0311 13:15:36.822308       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0311 13:15:57.913692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:51568->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.913773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:51526->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.913814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:51636->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.913936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:51640->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.913973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:51620->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:51604->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:51600->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:51558->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:51510->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:51650->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:51562->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:51590->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:51582->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:51542->192.168.49.2:8443: read: connection reset by peer
	E0311 13:15:57.914366       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:51656->192.168.49.2:8443: read: connection reset by peer
	
	
	==> kubelet <==
	Mar 11 13:15:41 ha-992796 kubelet[748]: E0311 13:15:41.907772     748 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-992796_kube-system(bbc8782f15dd47abf7a0f4bc9ad53253)\"" pod="kube-system/kube-controller-manager-ha-992796" podUID="bbc8782f15dd47abf7a0f4bc9ad53253"
	Mar 11 13:15:42 ha-992796 kubelet[748]: I0311 13:15:42.910758     748 scope.go:117] "RemoveContainer" containerID="c80f1ba38240495d99d74354bbf2465e68cc0f0bf7c0984a915b74f0cdc9a9be"
	Mar 11 13:15:42 ha-992796 kubelet[748]: E0311 13:15:42.911242     748 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-992796_kube-system(bbc8782f15dd47abf7a0f4bc9ad53253)\"" pod="kube-system/kube-controller-manager-ha-992796" podUID="bbc8782f15dd47abf7a0f4bc9ad53253"
	Mar 11 13:15:51 ha-992796 kubelet[748]: I0311 13:15:51.053016     748 scope.go:117] "RemoveContainer" containerID="c80f1ba38240495d99d74354bbf2465e68cc0f0bf7c0984a915b74f0cdc9a9be"
	Mar 11 13:15:51 ha-992796 kubelet[748]: E0311 13:15:51.053663     748 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-992796_kube-system(bbc8782f15dd47abf7a0f4bc9ad53253)\"" pod="kube-system/kube-controller-manager-ha-992796" podUID="bbc8782f15dd47abf7a0f4bc9ad53253"
	Mar 11 13:15:53 ha-992796 kubelet[748]: I0311 13:15:53.932707     748 scope.go:117] "RemoveContainer" containerID="79e79abf80700916f87e93f0d684812da355d799ef8059d2d2fb0e80e7f85fe9"
	Mar 11 13:15:53 ha-992796 kubelet[748]: I0311 13:15:53.933232     748 status_manager.go:853] "Failed to get status for pod" podUID="7a6124d74e4bf594b6ca7bf4ba542cb7" pod="kube-system/kube-apiserver-ha-992796" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-992796\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Mar 11 13:15:53 ha-992796 kubelet[748]: E0311 13:15:53.936305     748 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ha-992796.17bbb815cb162f88", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"2546", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ha-992796", UID:"7a6124d74e4bf594b6ca7bf4ba542cb7", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Pulled", Message:"Container image \"registry.k8s.io/kube-apiserver:v1.28.4\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"ha-992796"}, FirstTimestamp:time.Date(2024, time.March, 11, 13, 14, 47, 0, time.Local), LastTimestamp:time.Date(2024, time.March, 11, 13, 15, 53, 935579598, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ha-992796"}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-992796.17bbb815cb162f88": dial tcp 192.168.49.254:8443: connect: connection refused'(may retry after sleeping)
	Mar 11 13:15:57 ha-992796 kubelet[748]: E0311 13:15:57.581597     748 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:40662->192.168.49.254:8443: read: connection reset by peer
	Mar 11 13:15:57 ha-992796 kubelet[748]: E0311 13:15:57.582471     748 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:40600->192.168.49.254:8443: read: connection reset by peer
	Mar 11 13:15:57 ha-992796 kubelet[748]: E0311 13:15:57.582815     748 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:40614->192.168.49.254:8443: read: connection reset by peer
	Mar 11 13:15:57 ha-992796 kubelet[748]: E0311 13:15:57.583894     748 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:40624->192.168.49.254:8443: read: connection reset by peer
	Mar 11 13:15:58 ha-992796 kubelet[748]: I0311 13:15:58.951473     748 scope.go:117] "RemoveContainer" containerID="7d95f61055560f5e48835ca85dbc7985c0440370e41659765c51b49391a365c6"
	Mar 11 13:15:59 ha-992796 kubelet[748]: I0311 13:15:59.956347     748 scope.go:117] "RemoveContainer" containerID="92a2cfe26a51d5d703f0a56ee3550ff82b341561c10272b4a5fee62c22bf5999"
	Mar 11 13:16:01 ha-992796 kubelet[748]: I0311 13:16:01.709698     748 scope.go:117] "RemoveContainer" containerID="c80f1ba38240495d99d74354bbf2465e68cc0f0bf7c0984a915b74f0cdc9a9be"
	Mar 11 13:16:01 ha-992796 kubelet[748]: E0311 13:16:01.710230     748 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-992796_kube-system(bbc8782f15dd47abf7a0f4bc9ad53253)\"" pod="kube-system/kube-controller-manager-ha-992796" podUID="bbc8782f15dd47abf7a0f4bc9ad53253"
	Mar 11 13:16:08 ha-992796 kubelet[748]: E0311 13:16:08.151011     748 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-992796?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 11 13:16:08 ha-992796 kubelet[748]: E0311 13:16:08.364409     748 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-992796\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-992796?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 11 13:16:16 ha-992796 kubelet[748]: I0311 13:16:16.709749     748 scope.go:117] "RemoveContainer" containerID="c80f1ba38240495d99d74354bbf2465e68cc0f0bf7c0984a915b74f0cdc9a9be"
	Mar 11 13:16:18 ha-992796 kubelet[748]: E0311 13:16:18.155488     748 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-992796?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 11 13:16:18 ha-992796 kubelet[748]: E0311 13:16:18.365249     748 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-992796\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-992796?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 11 13:16:28 ha-992796 kubelet[748]: E0311 13:16:28.156620     748 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-992796?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 11 13:16:28 ha-992796 kubelet[748]: E0311 13:16:28.367049     748 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-992796\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-992796?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 11 13:16:38 ha-992796 kubelet[748]: E0311 13:16:38.156838     748 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-992796?timeout=10s\": context deadline exceeded"
	Mar 11 13:16:38 ha-992796 kubelet[748]: E0311 13:16:38.368293     748 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-992796\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-992796?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-992796 -n ha-992796
helpers_test.go:261: (dbg) Run:  kubectl --context ha-992796 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
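The post-mortem logs above show two related symptoms: etcd range reads stalled for roughly 11s on "agreement among raft nodes before linearized reading", and the kubelet on ha-992796 repeatedly timing out lease renewals against the control-plane endpoint 192.168.49.254:8443 while the cluster restarted. For manual triage of a cluster in this state, a minimal sketch of checking etcd quorum from the primary control-plane node; the static-pod name (etcd-ha-992796) and the certificate paths follow minikube's usual layout and are assumptions here, not values taken from this report:

	# Hypothetical manual check; pod name and cert paths assume minikube's default layout.
	kubectl --context ha-992796 -n kube-system exec etcd-ha-992796 -- \
	  sh -c 'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status --write-out=table'

A healthy HA restart should report a stable leader shortly after the members rejoin; a prolonged leader election would account for the linearized-read stalls captured above.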
--- FAIL: TestMutliControlPlane/serial/RestartCluster (127.32s)
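To replay just this failure outside CI, a minimal sketch against a minikube source checkout with out/minikube-linux-arm64 already built, mirroring this job's docker driver and cri-o runtime. The timeout value is arbitrary, and -minikube-start-args is hedged as the integration suite's conventional flag for passing start arguments; exact flags and any required build tags may differ, so check the repo's integration-test docs:

	# Hypothetical local re-run; -run selects the failing serial group.
	go test ./test/integration -v -timeout 60m \
	  -run 'TestMutliControlPlane' \
	  -args --minikube-start-args='--driver=docker --container-runtime=crio'

RestartCluster is a serial subtest, so selecting the whole TestMutliControlPlane group (rather than the single subtest) lets the earlier StartCluster/StopCluster steps build the cluster state it restarts.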

                                                
                                    

Test pass (301/335)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.11
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.28.4/json-events 7.31
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 6.64
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.21
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.41
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.22
30 TestBinaryMirror 0.58
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 167.77
38 TestAddons/parallel/Registry 16.26
40 TestAddons/parallel/InspektorGadget 10.88
41 TestAddons/parallel/MetricsServer 6.8
44 TestAddons/parallel/CSI 67.57
45 TestAddons/parallel/Headlamp 12.43
46 TestAddons/parallel/CloudSpanner 6.56
47 TestAddons/parallel/LocalPath 53.49
48 TestAddons/parallel/NvidiaDevicePlugin 6.54
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.24
54 TestCertOptions 37.32
55 TestCertExpiration 248.54
57 TestForceSystemdFlag 39.93
58 TestForceSystemdEnv 41.05
64 TestErrorSpam/setup 32.45
65 TestErrorSpam/start 0.73
66 TestErrorSpam/status 1
67 TestErrorSpam/pause 1.69
68 TestErrorSpam/unpause 1.78
69 TestErrorSpam/stop 1.48
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 50.62
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 28.68
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.65
81 TestFunctional/serial/CacheCmd/cache/add_local 1.21
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.01
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
89 TestFunctional/serial/ExtraConfig 38.3
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.74
92 TestFunctional/serial/LogsFileCmd 1.82
93 TestFunctional/serial/InvalidService 4.35
95 TestFunctional/parallel/ConfigCmd 0.52
96 TestFunctional/parallel/DashboardCmd 10.82
97 TestFunctional/parallel/DryRun 0.46
98 TestFunctional/parallel/InternationalLanguage 0.28
99 TestFunctional/parallel/StatusCmd 1.29
103 TestFunctional/parallel/ServiceCmdConnect 11.87
104 TestFunctional/parallel/AddonsCmd 0.34
105 TestFunctional/parallel/PersistentVolumeClaim 27.11
107 TestFunctional/parallel/SSHCmd 0.73
108 TestFunctional/parallel/CpCmd 2.39
110 TestFunctional/parallel/FileSync 0.32
111 TestFunctional/parallel/CertSync 2.16
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
119 TestFunctional/parallel/License 0.35
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.48
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.17
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 6.28
132 TestFunctional/parallel/ServiceCmd/List 0.59
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
135 TestFunctional/parallel/ProfileCmd/profile_list 0.53
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
138 TestFunctional/parallel/ServiceCmd/Format 0.57
139 TestFunctional/parallel/MountCmd/any-port 7.78
140 TestFunctional/parallel/ServiceCmd/URL 0.49
141 TestFunctional/parallel/MountCmd/specific-port 2.46
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.98
143 TestFunctional/parallel/Version/short 0.11
144 TestFunctional/parallel/Version/components 1.41
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.82
150 TestFunctional/parallel/ImageCommands/Setup 2.38
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.06
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.53
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.91
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.23
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.92
161 TestFunctional/delete_addon-resizer_images 0.09
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMutliControlPlane/serial/StartCluster 157.55
168 TestMutliControlPlane/serial/DeployApp 7.16
169 TestMutliControlPlane/serial/PingHostFromPods 1.73
170 TestMutliControlPlane/serial/AddWorkerNode 27.23
171 TestMutliControlPlane/serial/NodeLabels 0.1
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.8
173 TestMutliControlPlane/serial/CopyFile 19.85
174 TestMutliControlPlane/serial/StopSecondaryNode 12.74
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.58
176 TestMutliControlPlane/serial/RestartSecondaryNode 21.19
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.4
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 215.13
179 TestMutliControlPlane/serial/DeleteSecondaryNode 12.04
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
181 TestMutliControlPlane/serial/StopCluster 35.74
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.57
184 TestMutliControlPlane/serial/AddSecondaryNode 64.94
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
189 TestJSONOutput/start/Command 49.04
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.74
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.64
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.9
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 43.95
215 TestKicCustomNetwork/use_default_bridge_network 36.67
216 TestKicExistingNetwork 33.03
217 TestKicCustomSubnet 34.47
218 TestKicStaticIP 34.07
219 TestMainNoArgs 0.07
220 TestMinikubeProfile 67.39
223 TestMountStart/serial/StartWithMountFirst 6.59
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 9.1
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.64
228 TestMountStart/serial/VerifyMountPostDelete 0.25
229 TestMountStart/serial/Stop 1.2
230 TestMountStart/serial/RestartStopped 8.97
231 TestMountStart/serial/VerifyMountPostStop 0.27
234 TestMultiNode/serial/FreshStart2Nodes 64.13
235 TestMultiNode/serial/DeployApp2Nodes 5.03
236 TestMultiNode/serial/PingHostFrom2Pods 1.07
237 TestMultiNode/serial/AddNode 19.76
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.36
240 TestMultiNode/serial/CopyFile 10.24
241 TestMultiNode/serial/StopNode 2.27
242 TestMultiNode/serial/StartAfterStop 9.46
243 TestMultiNode/serial/RestartKeepsNodes 84.91
244 TestMultiNode/serial/DeleteNode 5.34
245 TestMultiNode/serial/StopMultiNode 23.93
246 TestMultiNode/serial/RestartMultiNode 48.64
247 TestMultiNode/serial/ValidateNameConflict 34.13
252 TestPreload 119.11
254 TestScheduledStopUnix 106.29
257 TestInsufficientStorage 11.22
258 TestRunningBinaryUpgrade 87.05
260 TestKubernetesUpgrade 402.99
261 TestMissingContainerUpgrade 154.13
263 TestPause/serial/Start 92.38
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.14
266 TestNoKubernetes/serial/StartWithK8s 46.37
267 TestNoKubernetes/serial/StartWithStopK8s 6.99
268 TestNoKubernetes/serial/Start 9.16
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
270 TestNoKubernetes/serial/ProfileList 0.95
271 TestNoKubernetes/serial/Stop 1.23
272 TestNoKubernetes/serial/StartNoArgs 7.03
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
274 TestPause/serial/SecondStartNoReconfiguration 19.5
275 TestPause/serial/Pause 1.18
276 TestPause/serial/VerifyStatus 0.45
277 TestPause/serial/Unpause 0.89
278 TestPause/serial/PauseAgain 1.05
279 TestPause/serial/DeletePaused 3.26
280 TestPause/serial/VerifyDeletedResources 0.25
281 TestStoppedBinaryUpgrade/Setup 1.2
282 TestStoppedBinaryUpgrade/Upgrade 89.73
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
298 TestNetworkPlugins/group/false 4.68
303 TestStartStop/group/old-k8s-version/serial/FirstStart 160.69
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
306 TestStartStop/group/old-k8s-version/serial/Stop 12.28
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.59
308 TestStartStop/group/old-k8s-version/serial/SecondStart 33.77
310 TestStartStop/group/no-preload/serial/FirstStart 78.51
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 36
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
314 TestStartStop/group/old-k8s-version/serial/Pause 3.43
316 TestStartStop/group/embed-certs/serial/FirstStart 54.67
317 TestStartStop/group/no-preload/serial/DeployApp 10.42
318 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.42
319 TestStartStop/group/no-preload/serial/Stop 12.2
320 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
321 TestStartStop/group/no-preload/serial/SecondStart 279.74
322 TestStartStop/group/embed-certs/serial/DeployApp 8.38
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.86
324 TestStartStop/group/embed-certs/serial/Stop 12.68
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
326 TestStartStop/group/embed-certs/serial/SecondStart 267.47
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
330 TestStartStop/group/no-preload/serial/Pause 3.08
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
336 TestStartStop/group/embed-certs/serial/Pause 4.09
338 TestStartStop/group/newest-cni/serial/FirstStart 51.37
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.53
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.43
341 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.05
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 271.19
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.44
346 TestStartStop/group/newest-cni/serial/Stop 1.37
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
348 TestStartStop/group/newest-cni/serial/SecondStart 18.11
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
352 TestStartStop/group/newest-cni/serial/Pause 3.84
353 TestNetworkPlugins/group/auto/Start 53.77
354 TestNetworkPlugins/group/auto/KubeletFlags 0.34
355 TestNetworkPlugins/group/auto/NetCatPod 11.31
356 TestNetworkPlugins/group/auto/DNS 0.19
357 TestNetworkPlugins/group/auto/Localhost 0.16
358 TestNetworkPlugins/group/auto/HairPin 0.16
359 TestNetworkPlugins/group/kindnet/Start 52.58
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
362 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
363 TestNetworkPlugins/group/kindnet/DNS 0.19
364 TestNetworkPlugins/group/kindnet/Localhost 0.18
365 TestNetworkPlugins/group/kindnet/HairPin 0.17
366 TestNetworkPlugins/group/calico/Start 75.16
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.02
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
369 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
370 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.58
371 TestNetworkPlugins/group/custom-flannel/Start 66.53
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.37
374 TestNetworkPlugins/group/calico/NetCatPod 13.4
375 TestNetworkPlugins/group/calico/DNS 0.32
376 TestNetworkPlugins/group/calico/Localhost 0.22
377 TestNetworkPlugins/group/calico/HairPin 0.26
378 TestNetworkPlugins/group/enable-default-cni/Start 48.9
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.52
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.36
381 TestNetworkPlugins/group/custom-flannel/DNS 0.23
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
384 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
385 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.61
386 TestNetworkPlugins/group/flannel/Start 74.5
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.57
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.33
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.36
390 TestNetworkPlugins/group/bridge/Start 84.71
391 TestNetworkPlugins/group/flannel/ControllerPod 6.01
392 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
393 TestNetworkPlugins/group/flannel/NetCatPod 10.25
394 TestNetworkPlugins/group/flannel/DNS 0.18
395 TestNetworkPlugins/group/flannel/Localhost 0.16
396 TestNetworkPlugins/group/flannel/HairPin 0.17
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
398 TestNetworkPlugins/group/bridge/NetCatPod 10.28
399 TestNetworkPlugins/group/bridge/DNS 0.17
400 TestNetworkPlugins/group/bridge/Localhost 0.16
401 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (8.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-846348 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-846348 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.107790287s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
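preload-exists checks that the --download-only run above left the expected preload tarball in minikube's cache, when one is published for this Kubernetes version, runtime, and architecture combination. A minimal manual spot-check, using the cache root that the LogsDuration output below reports for this run:

	ls -lh /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/

The tarball filename encodes the Kubernetes version, container runtime, and architecture, so listing the directory is the simplest way to confirm what a download-only start actually fetched.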

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-846348
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-846348: exit status 85 (87.720015ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-846348 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |          |
	|         | -p download-only-846348        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:53:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:53:29.585948 1129911 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:53:29.586100 1129911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:53:29.586112 1129911 out.go:304] Setting ErrFile to fd 2...
	I0311 12:53:29.586118 1129911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:53:29.586364 1129911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	W0311 12:53:29.586503 1129911 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18350-1124504/.minikube/config/config.json: open /home/jenkins/minikube-integration/18350-1124504/.minikube/config/config.json: no such file or directory
	I0311 12:53:29.586889 1129911 out.go:298] Setting JSON to true
	I0311 12:53:29.587775 1129911 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16557,"bootTime":1710145053,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0311 12:53:29.587850 1129911 start.go:139] virtualization:  
	I0311 12:53:29.592814 1129911 out.go:97] [download-only-846348] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:53:29.595872 1129911 out.go:169] MINIKUBE_LOCATION=18350
	W0311 12:53:29.593093 1129911 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball: no such file or directory
	I0311 12:53:29.593136 1129911 notify.go:220] Checking for updates...
	I0311 12:53:29.601568 1129911 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:53:29.604296 1129911 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 12:53:29.606763 1129911 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	I0311 12:53:29.608472 1129911 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 12:53:29.612082 1129911 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 12:53:29.612366 1129911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:53:29.632992 1129911 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:53:29.633099 1129911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:53:29.701256 1129911 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 12:53:29.692474826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:53:29.701378 1129911 docker.go:295] overlay module found
	I0311 12:53:29.703426 1129911 out.go:97] Using the docker driver based on user configuration
	I0311 12:53:29.703449 1129911 start.go:297] selected driver: docker
	I0311 12:53:29.703457 1129911 start.go:901] validating driver "docker" against <nil>
	I0311 12:53:29.703553 1129911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:53:29.766475 1129911 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 12:53:29.756439883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:53:29.766654 1129911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:53:29.766958 1129911 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 12:53:29.767135 1129911 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 12:53:29.769752 1129911 out.go:169] Using Docker driver with root privileges
	I0311 12:53:29.771856 1129911 cni.go:84] Creating CNI manager for ""
	I0311 12:53:29.771891 1129911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0311 12:53:29.771902 1129911 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:53:29.772027 1129911 start.go:340] cluster config:
	{Name:download-only-846348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-846348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:53:29.773915 1129911 out.go:97] Starting "download-only-846348" primary control-plane node in "download-only-846348" cluster
	I0311 12:53:29.773950 1129911 cache.go:121] Beginning downloading kic base image for docker with crio
	I0311 12:53:29.775965 1129911 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:53:29.776024 1129911 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 12:53:29.776099 1129911 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:53:29.790906 1129911 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:53:29.791092 1129911 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:53:29.791189 1129911 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:53:29.844518 1129911 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0311 12:53:29.844545 1129911 cache.go:56] Caching tarball of preloaded images
	I0311 12:53:29.845401 1129911 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 12:53:29.847554 1129911 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0311 12:53:29.847580 1129911 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0311 12:53:29.957885 1129911 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0311 12:53:34.559911 1129911 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	
	
	* The control-plane node download-only-846348 host does not exist
	  To start a cluster, run: "minikube start -p download-only-846348"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
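Note: exit status 85 from "minikube logs" is the expected outcome here: a --download-only profile never creates a host (the stdout above says "host does not exist"), so there are no logs to collect, and the harness passes after observing the non-zero exit. A minimal Go sketch of accepting that exit code (binary path and profile name are taken from the log above; this is illustrative, not the harness's actual assertion):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// A download-only profile has no host, so "minikube logs" is
		// expected to fail with exit status 85.
		cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-846348")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("expected failure: no host exists for this profile")
			return
		}
		fmt.Printf("unexpected result: %v\n", err)
	}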

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-846348
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.28.4/json-events (7.31s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-545020 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-545020 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.312262281s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (7.31s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-545020
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-545020: exit status 85 (91.390224ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-846348 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | -p download-only-846348        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| delete  | -p download-only-846348        | download-only-846348 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| start   | -o=json --download-only        | download-only-545020 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | -p download-only-545020        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:53:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:53:38.154482 1130075 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:53:38.154646 1130075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:53:38.154657 1130075 out.go:304] Setting ErrFile to fd 2...
	I0311 12:53:38.154662 1130075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:53:38.154884 1130075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 12:53:38.155281 1130075 out.go:298] Setting JSON to true
	I0311 12:53:38.156220 1130075 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16566,"bootTime":1710145053,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0311 12:53:38.156291 1130075 start.go:139] virtualization:  
	I0311 12:53:38.164990 1130075 out.go:97] [download-only-545020] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:53:38.165196 1130075 notify.go:220] Checking for updates...
	I0311 12:53:38.175861 1130075 out.go:169] MINIKUBE_LOCATION=18350
	I0311 12:53:38.179639 1130075 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:53:38.182194 1130075 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 12:53:38.184474 1130075 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	I0311 12:53:38.186510 1130075 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 12:53:38.190598 1130075 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 12:53:38.190864 1130075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:53:38.211678 1130075 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:53:38.211779 1130075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:53:38.281620 1130075 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-11 12:53:38.271468722 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:53:38.281735 1130075 docker.go:295] overlay module found
	I0311 12:53:38.284228 1130075 out.go:97] Using the docker driver based on user configuration
	I0311 12:53:38.284261 1130075 start.go:297] selected driver: docker
	I0311 12:53:38.284269 1130075 start.go:901] validating driver "docker" against <nil>
	I0311 12:53:38.284394 1130075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:53:38.346962 1130075 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-11 12:53:38.338591205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:53:38.347122 1130075 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:53:38.347408 1130075 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 12:53:38.347573 1130075 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 12:53:38.349811 1130075 out.go:169] Using Docker driver with root privileges
	I0311 12:53:38.352126 1130075 cni.go:84] Creating CNI manager for ""
	I0311 12:53:38.352147 1130075 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0311 12:53:38.352160 1130075 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:53:38.352241 1130075 start.go:340] cluster config:
	{Name:download-only-545020 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-545020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:53:38.354392 1130075 out.go:97] Starting "download-only-545020" primary control-plane node in "download-only-545020" cluster
	I0311 12:53:38.354419 1130075 cache.go:121] Beginning downloading kic base image for docker with crio
	I0311 12:53:38.356672 1130075 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:53:38.356704 1130075 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 12:53:38.356869 1130075 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:53:38.371375 1130075 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:53:38.371497 1130075 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:53:38.371522 1130075 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 12:53:38.371530 1130075 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 12:53:38.371538 1130075 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 12:53:38.420089 1130075 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0311 12:53:38.420118 1130075 cache.go:56] Caching tarball of preloaded images
	I0311 12:53:38.420297 1130075 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 12:53:38.422564 1130075 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0311 12:53:38.422593 1130075 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0311 12:53:38.533124 1130075 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-545020 host does not exist
	  To start a cluster, run: "minikube start -p download-only-545020"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-545020
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (6.64s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-842375 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-842375 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.643679804s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (6.64s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-842375
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-842375: exit status 85 (209.905363ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-846348 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | -p download-only-846348           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| delete  | -p download-only-846348           | download-only-846348 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| start   | -o=json --download-only           | download-only-545020 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | -p download-only-545020           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| delete  | -p download-only-545020           | download-only-545020 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC | 11 Mar 24 12:53 UTC |
	| start   | -o=json --download-only           | download-only-842375 | jenkins | v1.32.0 | 11 Mar 24 12:53 UTC |                     |
	|         | -p download-only-842375           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 12:53:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 12:53:45.909989 1130233 out.go:291] Setting OutFile to fd 1 ...
	I0311 12:53:45.910190 1130233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:53:45.910217 1130233 out.go:304] Setting ErrFile to fd 2...
	I0311 12:53:45.910237 1130233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 12:53:45.910519 1130233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 12:53:45.910991 1130233 out.go:298] Setting JSON to true
	I0311 12:53:45.911927 1130233 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16573,"bootTime":1710145053,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0311 12:53:45.912022 1130233 start.go:139] virtualization:  
	I0311 12:53:45.914993 1130233 out.go:97] [download-only-842375] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 12:53:45.917366 1130233 out.go:169] MINIKUBE_LOCATION=18350
	I0311 12:53:45.915198 1130233 notify.go:220] Checking for updates...
	I0311 12:53:45.922181 1130233 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 12:53:45.924739 1130233 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 12:53:45.926711 1130233 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	I0311 12:53:45.928710 1130233 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 12:53:45.933447 1130233 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 12:53:45.933752 1130233 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 12:53:45.959222 1130233 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 12:53:45.959331 1130233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:53:46.023300 1130233 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:53:46.011834929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:53:46.023416 1130233 docker.go:295] overlay module found
	I0311 12:53:46.025621 1130233 out.go:97] Using the docker driver based on user configuration
	I0311 12:53:46.025652 1130233 start.go:297] selected driver: docker
	I0311 12:53:46.025660 1130233 start.go:901] validating driver "docker" against <nil>
	I0311 12:53:46.025769 1130233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 12:53:46.086949 1130233 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 12:53:46.077333492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 12:53:46.087125 1130233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 12:53:46.087421 1130233 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 12:53:46.087574 1130233 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 12:53:46.089877 1130233 out.go:169] Using Docker driver with root privileges
	I0311 12:53:46.092610 1130233 cni.go:84] Creating CNI manager for ""
	I0311 12:53:46.092634 1130233 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0311 12:53:46.092645 1130233 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 12:53:46.092726 1130233 start.go:340] cluster config:
	{Name:download-only-842375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-842375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 12:53:46.094891 1130233 out.go:97] Starting "download-only-842375" primary control-plane node in "download-only-842375" cluster
	I0311 12:53:46.094919 1130233 cache.go:121] Beginning downloading kic base image for docker with crio
	I0311 12:53:46.097277 1130233 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 12:53:46.097303 1130233 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 12:53:46.097403 1130233 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 12:53:46.113002 1130233 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 12:53:46.113132 1130233 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 12:53:46.113157 1130233 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 12:53:46.113162 1130233 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 12:53:46.113169 1130233 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 12:53:46.161082 1130233 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0311 12:53:46.161109 1130233 cache.go:56] Caching tarball of preloaded images
	I0311 12:53:46.161790 1130233 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 12:53:46.164010 1130233 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0311 12:53:46.164034 1130233 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0311 12:53:46.275751 1130233 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:9d8119c6fd5c58f71de57a6fdbe27bf3 -> /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0311 12:53:50.857619 1130233 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0311 12:53:50.857770 1130233 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18350-1124504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-842375 host does not exist
	  To start a cluster, run: "minikube start -p download-only-842375"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.21s)
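The preload steps above append an md5 digest to the download URL (download.go:107) and verify the file after the fetch (preload.go:255). A minimal sketch of a checksum-gated download in Go, reusing the URL and digest from the log above; this is illustrative only, not minikube's actual implementation:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams url into dest while hashing the bytes, then
	// rejects the file if its hex md5 digest does not match want.
	func downloadWithMD5(url, dest, want string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch for %s: got %s, want %s", dest, got, want)
		}
		return nil
	}

	func main() {
		// URL and digest copied from the ?checksum=md5:... query string above.
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4",
			"preloaded-images.tar.lz4",
			"9d8119c6fd5c58f71de57a6fdbe27bf3",
		)
		if err != nil {
			fmt.Println(err)
		}
	}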

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.41s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.41s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-842375
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-587576 --alsologtostderr --binary-mirror http://127.0.0.1:40125 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-587576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-587576
--- PASS: TestBinaryMirror (0.58s)
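TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:40125 above) so that kubectl, kubeadm and kubelet are fetched from it rather than from the public release bucket. Any HTTP server exposing the release-bucket directory layout will do; a minimal stand-in in Go (the ./mirror directory name is an assumption):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory of Kubernetes release binaries so that
		// "minikube start --binary-mirror http://127.0.0.1:40125" can
		// download from it instead of the public release bucket.
		log.Println("serving ./mirror on 127.0.0.1:40125")
		log.Fatal(http.ListenAndServe("127.0.0.1:40125", http.FileServer(http.Dir("./mirror"))))
	}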

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-127043
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-127043: exit status 85 (88.688117ms)
-- stdout --
	* Profile "addons-127043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-127043"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-127043
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-127043: exit status 85 (88.744895ms)
-- stdout --
	* Profile "addons-127043" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-127043"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (167.77s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-127043 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-127043 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m47.768821388s)
--- PASS: TestAddons/Setup (167.77s)

TestAddons/parallel/Registry (16.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 57.094983ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7wnxf" [08602d05-9b4b-4fab-ae06-5d18c7d6971b] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005459616s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ckqms" [fbeb4621-6db7-4a4a-b294-12b96612b41d] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005678497s
addons_test.go:340: (dbg) Run:  kubectl --context addons-127043 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-127043 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-127043 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.045885476s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.26s)
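The registry check above confirms in-cluster reachability with "wget --spider -S" against the service DNS name. An equivalent probe in Go is sketched below; note it only works from inside the cluster, where registry.kube-system.svc.cluster.local resolves:

	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// HEAD is the closest match to wget --spider: fetch headers only,
		// without downloading a body.
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}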

TestAddons/parallel/InspektorGadget (10.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kx8dq" [cfcd486b-7106-487a-a10e-552391306153] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004048384s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-127043
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-127043: (5.875935528s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (6.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.022125ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-hlhsg" [6f000c7f-3777-4a03-a0f6-5728160d4000] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004161607s
addons_test.go:415: (dbg) Run:  kubectl --context addons-127043 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)

                                                
                                    
TestAddons/parallel/CSI (67.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 56.81051ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-127043 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/03/11 12:56:58 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-127043 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cba0efa5-da54-4c50-97e6-ddc25d28da08] Pending
helpers_test.go:344: "task-pv-pod" [cba0efa5-da54-4c50-97e6-ddc25d28da08] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cba0efa5-da54-4c50-97e6-ddc25d28da08] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004310098s
addons_test.go:584: (dbg) Run:  kubectl --context addons-127043 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-127043 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-127043 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-127043 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-127043 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-127043 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-127043 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [18becdbd-309a-43d3-a0d9-e0118c9c0d11] Pending
helpers_test.go:344: "task-pv-pod-restore" [18becdbd-309a-43d3-a0d9-e0118c9c0d11] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [18becdbd-309a-43d3-a0d9-e0118c9c0d11] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004250778s
addons_test.go:626: (dbg) Run:  kubectl --context addons-127043 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-127043 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-127043 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-127043 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.748546396s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.57s)
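
Note: the repeated helpers_test.go:394 lines above are a poll loop — the helper re-reads the PVC's .status.phase until it reports Bound. A rough standalone equivalent in Go, assuming kubectl is on PATH (the function name waitPVCBound and the 2-second interval are illustrative, not minikube's actual helper; the 6-minute timeout mirrors the "waiting 6m0s" in the log):

// poll_pvc.go - sketch of the PVC phase polling visible above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPVCBound(ctx, name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // re-query until Bound or deadline
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-127043", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}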

                                                
                                    
TestAddons/parallel/Headlamp (12.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-127043 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-127043 --alsologtostderr -v=1: (1.422327499s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-hwkqk" [1a4db1bc-8cf2-44a9-bccc-e5f88deec4d3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-hwkqk" [1a4db1bc-8cf2-44a9-bccc-e5f88deec4d3] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-hwkqk" [1a4db1bc-8cf2-44a9-bccc-e5f88deec4d3] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004093819s
--- PASS: TestAddons/parallel/Headlamp (12.43s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-ld58t" [ccdac860-766b-4862-857b-f5ac333843fc] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003457754s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-127043
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
TestAddons/parallel/LocalPath (53.49s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-127043 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-127043 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127043 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0ecae964-f2bb-493b-a2ec-5f87425c1500] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0ecae964-f2bb-493b-a2ec-5f87425c1500] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0ecae964-f2bb-493b-a2ec-5f87425c1500] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003531628s
addons_test.go:891: (dbg) Run:  kubectl --context addons-127043 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 ssh "cat /opt/local-path-provisioner/pvc-99a0ddad-c9ee-45a7-ab15-bb7b3e522400_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-127043 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-127043 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-127043 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-127043 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.336498342s)
--- PASS: TestAddons/parallel/LocalPath (53.49s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kh579" [946f3486-8140-43ba-8eb4-b1e76771c071] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008214656s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-127043
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-fcvqn" [64a8ca28-bb59-4464-b6be-c356b7498dd5] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004256158s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-127043 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-127043 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-127043
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-127043: (11.944720026s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-127043
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-127043
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-127043
--- PASS: TestAddons/StoppedEnableDisable (12.24s)

                                                
                                    
TestCertOptions (37.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-929507 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0311 13:41:43.056113 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-929507 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.441903851s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-929507 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-929507 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-929507 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-929507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-929507
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-929507: (2.058585797s)
--- PASS: TestCertOptions (37.32s)
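
Note: the openssl step above verifies that the extra --apiserver-names and --apiserver-ips ended up as SANs in the generated apiserver certificate. A small Go sketch of the same inspection using crypto/x509 (the local file name "apiserver.crt" is a hypothetical copy; on the node the file lives at /var/lib/minikube/certs/apiserver.crt, reachable via minikube ssh):

// check_sans.go - sketch of the SAN check behind the openssl command above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the cert
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The test expects names like localhost / www.google.com and
	// IPs like 127.0.0.1 / 192.168.15.15 to appear here.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs :", cert.IPAddresses)
}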

                                                
                                    
TestCertExpiration (248.54s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-686180 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-686180 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.827703406s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-686180 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-686180 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.925818906s)
helpers_test.go:175: Cleaning up "cert-expiration-686180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-686180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-686180: (2.783427534s)
--- PASS: TestCertExpiration (248.54s)

                                                
                                    
TestForceSystemdFlag (39.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-852604 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-852604 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.164463574s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-852604 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-852604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-852604
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-852604: (2.455599801s)
--- PASS: TestForceSystemdFlag (39.93s)
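
Note: the cat step above reads CRI-O's drop-in config after starting with --force-systemd. A sketch of that assertion in Go; the expected cgroup_manager = "systemd" line is an assumption about the config's contents, and the binary/profile names are taken from this log:

// check_systemd.go - sketch of the --force-systemd assertion above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-852604",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("systemd cgroup manager configured")
	} else {
		fmt.Println("systemd cgroup manager NOT found")
	}
}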

                                                
                                    
TestForceSystemdEnv (41.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-747135 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-747135 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.670128539s)
helpers_test.go:175: Cleaning up "force-systemd-env-747135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-747135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-747135: (2.378558008s)
--- PASS: TestForceSystemdEnv (41.05s)

                                                
                                    
TestErrorSpam/setup (32.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-413614 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-413614 --driver=docker  --container-runtime=crio
E0311 13:01:43.065129 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:43.075613 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:43.085873 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:43.106159 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:43.146461 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:43.226815 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:43.387162 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:43.711236 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:44.351585 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:45.637464 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:48.197726 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:01:53.320017 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:02:03.560220 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-413614 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-413614 --driver=docker  --container-runtime=crio: (32.44454207s)
--- PASS: TestErrorSpam/setup (32.45s)

                                                
                                    
TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 status
--- PASS: TestErrorSpam/status (1.00s)

                                                
                                    
TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

                                                
                                    
TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 stop: (1.237073708s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-413614 --log_dir /tmp/nospam-413614 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18350-1124504/.minikube/files/etc/test/nested/copy/1129906/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.62s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-112360 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0311 13:02:24.040447 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:03:05.002649 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-112360 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (50.619991852s)
--- PASS: TestFunctional/serial/StartWithProxy (50.62s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.68s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-112360 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-112360 --alsologtostderr -v=8: (28.6821506s)
functional_test.go:659: soft start took 28.683216132s for "functional-112360" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.68s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-112360 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 cache add registry.k8s.io/pause:3.1: (1.244340388s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 cache add registry.k8s.io/pause:3.3: (1.254326025s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 cache add registry.k8s.io/pause:latest: (1.146212104s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-112360 /tmp/TestFunctionalserialCacheCmdcacheadd_local3262991099/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cache add minikube-local-cache-test:functional-112360
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cache delete minikube-local-cache-test:functional-112360
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-112360
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.321544ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 cache reload: (1.083545885s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)
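
Note: the sequence above is — remove a cached image on the node, confirm the lookup fails, run cache reload, confirm the image is back. A condensed Go sketch of the same sequence (illustrative only; the binary path and profile name are taken from this log):

// cache_reload.go - sketch of the rmi / inspecti / cache reload sequence above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-112360"
	_ = run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected lookup to fail after rmi")
	}
	_ = run("-p", p, "cache", "reload") // re-pushes cached images to the node
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}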

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 kubectl -- --context functional-112360 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-112360 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.3s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-112360 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-112360 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.304528472s)
functional_test.go:757: restart took 38.304655082s for "functional-112360" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.30s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-112360 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
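
Note: the phase/status lines above come from parsing the control-plane pods as JSON. A standalone Go sketch that produces a similar readout, assuming kubectl is on PATH and that control-plane pods carry the usual component label (this mirrors, but is not, functional_test.go's implementation):

// component_health.go - sketch of the control-plane health readout above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-112360",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}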

                                                
                                    
TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 logs
E0311 13:04:26.923809 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 logs: (1.737426913s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 logs --file /tmp/TestFunctionalserialLogsFileCmd1558969613/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 logs --file /tmp/TestFunctionalserialLogsFileCmd1558969613/001/logs.txt: (1.816751414s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)

                                                
                                    
TestFunctional/serial/InvalidService (4.35s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-112360 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-112360
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-112360: exit status 115 (461.411788ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30637 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-112360 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 config get cpus: exit status 14 (94.60429ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 config get cpus: exit status 14 (83.823449ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.82s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-112360 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-112360 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1155486: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.82s)

                                                
                                    
TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-112360 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-112360 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (200.991809ms)

-- stdout --
	* [functional-112360] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0311 13:05:08.086492 1155235 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:05:08.086683 1155235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:05:08.086756 1155235 out.go:304] Setting ErrFile to fd 2...
	I0311 13:05:08.086777 1155235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:05:08.087054 1155235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 13:05:08.087456 1155235 out.go:298] Setting JSON to false
	I0311 13:05:08.088453 1155235 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17255,"bootTime":1710145053,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0311 13:05:08.088558 1155235 start.go:139] virtualization:  
	I0311 13:05:08.091213 1155235 out.go:177] * [functional-112360] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 13:05:08.094385 1155235 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 13:05:08.095300 1155235 notify.go:220] Checking for updates...
	I0311 13:05:08.098930 1155235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:05:08.101564 1155235 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:05:08.103964 1155235 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	I0311 13:05:08.106411 1155235 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 13:05:08.108556 1155235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:05:08.111278 1155235 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:05:08.111923 1155235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:05:08.133756 1155235 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 13:05:08.133948 1155235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:05:08.212425 1155235 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-11 13:05:08.202328104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:05:08.212531 1155235 docker.go:295] overlay module found
	I0311 13:05:08.215054 1155235 out.go:177] * Using the docker driver based on existing profile
	I0311 13:05:08.217583 1155235 start.go:297] selected driver: docker
	I0311 13:05:08.217609 1155235 start.go:901] validating driver "docker" against &{Name:functional-112360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-112360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:05:08.217712 1155235 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:05:08.220495 1155235 out.go:177] 
	W0311 13:05:08.223203 1155235 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0311 13:05:08.225239 1155235 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-112360 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
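
For reference, the dry-run path validates resource requests before creating anything, and a request below minikube's 1800MB floor fails fast. A minimal manual trigger, taken from the commands above (the same invocation exits with status 23 in the InternationalLanguage run below):

	out/minikube-linux-arm64 start -p functional-112360 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
	echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)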

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-112360 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-112360 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (282.765199ms)

-- stdout --
	* [functional-112360] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0311 13:05:07.818871 1155193 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:05:07.819076 1155193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:05:07.819102 1155193 out.go:304] Setting ErrFile to fd 2...
	I0311 13:05:07.819120 1155193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:05:07.820931 1155193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 13:05:07.821558 1155193 out.go:298] Setting JSON to false
	I0311 13:05:07.822530 1155193 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17255,"bootTime":1710145053,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0311 13:05:07.822631 1155193 start.go:139] virtualization:  
	I0311 13:05:07.829245 1155193 out.go:177] * [functional-112360] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0311 13:05:07.831577 1155193 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 13:05:07.833949 1155193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:05:07.837438 1155193 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:05:07.833567 1155193 notify.go:220] Checking for updates...
	I0311 13:05:07.844832 1155193 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	I0311 13:05:07.846950 1155193 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 13:05:07.849204 1155193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:05:07.851998 1155193 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:05:07.852535 1155193 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:05:07.894931 1155193 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 13:05:07.895210 1155193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:05:08.001632 1155193 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-11 13:05:07.984201821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:05:08.001793 1155193 docker.go:295] overlay module found
	I0311 13:05:08.007545 1155193 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0311 13:05:08.009909 1155193 start.go:297] selected driver: docker
	I0311 13:05:08.009957 1155193 start.go:901] validating driver "docker" against &{Name:functional-112360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-112360 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:05:08.010137 1155193 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:05:08.017245 1155193 out.go:177] 
	W0311 13:05:08.019975 1155193 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0311 13:05:08.022610 1155193 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
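
The French stderr above is the point of this test: rerunning the same under-provisioned dry-run start under a French locale should produce the localized RSRC_INSUFFICIENT_REQ_MEMORY message ("Fermeture en raison de... L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the French rendering of "Exiting due to... requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A hypothetical manual equivalent, assuming the locale is picked up from the environment (the exact variable the test sets is an assumption):

	LC_ALL=fr out/minikube-linux-arm64 start -p functional-112360 --dry-run --memory 250MB --driver=docker --container-runtime=crio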

TestFunctional/parallel/StatusCmd (1.29s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)
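
The three invocations above cover the default, Go-template, and JSON output paths. The template keys ("host", "kublet", ...) are caller-chosen labels; only the {{.Field}} names must match minikube's status struct. A sketch with the label spelled conventionally:

	out/minikube-linux-arm64 -p functional-112360 status
	out/minikube-linux-arm64 -p functional-112360 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	out/minikube-linux-arm64 -p functional-112360 status -o json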

TestFunctional/parallel/ServiceCmdConnect (11.87s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-112360 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-112360 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-bp7ln" [69d7922d-7040-487e-85b9-17eb7a1c69dd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-bp7ln" [69d7922d-7040-487e-85b9-17eb7a1c69dd] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.012382562s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31553
functional_test.go:1671: http://192.168.49.2:31553: success! body:

Hostname: hello-node-connect-7799dfb7c6-bp7ln

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31553
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.87s)
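
Condensing the steps above into a manual reproduction (the final curl is an assumption, since the test fetches the URL internally, and the NodePort 31553 varies per run):

	kubectl --context functional-112360 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-112360 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-112360 service hello-node-connect --url
	curl -s http://192.168.49.2:31553/   # echoserver reflects the hostname, headers, and request info shown above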

TestFunctional/parallel/AddonsCmd (0.34s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.34s)

TestFunctional/parallel/PersistentVolumeClaim (27.11s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bbe0e552-d88d-4f00-b5b1-dc400b3e6463] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004654199s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-112360 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-112360 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-112360 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-112360 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [768d3b57-bc7a-4b37-82e1-64b37d8eb9ec] Pending
helpers_test.go:344: "sp-pod" [768d3b57-bc7a-4b37-82e1-64b37d8eb9ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [768d3b57-bc7a-4b37-82e1-64b37d8eb9ec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003793916s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-112360 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-112360 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-112360 delete -f testdata/storage-provisioner/pod.yaml: (1.02661676s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-112360 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [baa0ac4d-e093-4b54-a02e-337448b00b64] Pending
helpers_test.go:344: "sp-pod" [baa0ac4d-e093-4b54-a02e-337448b00b64] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005161981s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-112360 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.11s)
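
The persistence check here is: write a file through the claim, delete and recreate the pod, then confirm the file survived. Condensed from the steps above:

	kubectl --context functional-112360 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-112360 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-112360 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-112360 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-112360 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-112360 exec sp-pod -- ls /tmp/mount   # foo must still be present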

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh -n functional-112360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cp functional-112360:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3996324248/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh -n functional-112360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh -n functional-112360 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.39s)
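
The copy is exercised in both directions, host-to-node and node-to-host, and verified by cat-ing the file over ssh each time. A condensed sketch (the host-side destination path is illustrative):

	out/minikube-linux-arm64 -p functional-112360 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-112360 ssh -n functional-112360 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p functional-112360 cp functional-112360:/home/docker/cp-test.txt /tmp/cp-test.txt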

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1129906/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo cat /etc/test/nested/copy/1129906/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1129906.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo cat /etc/ssl/certs/1129906.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1129906.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo cat /usr/share/ca-certificates/1129906.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11299062.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo cat /etc/ssl/certs/11299062.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11299062.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo cat /usr/share/ca-certificates/11299062.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)
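
The numeric file names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which TLS clients use to locate CA certificates. As a side note, the expected hash for a given certificate can be computed inside the node with:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/1129906.pem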

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-112360 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 ssh "sudo systemctl is-active docker": exit status 1 (306.420978ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 ssh "sudo systemctl is-active containerd": exit status 1 (288.877835ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
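
Exit status 3 is systemctl's standard code for an inactive unit, so the non-zero exits above are the expected outcome on a crio-runtime node. For contrast (an assumption, not part of the test):

	out/minikube-linux-arm64 -p functional-112360 ssh "sudo systemctl is-active crio"   # should print "active" and exit 0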

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-112360 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-112360 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-112360 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1153210: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-112360 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-112360 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-112360 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bcf5a53e-07bb-421e-a267-cadfab23a325] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bcf5a53e-07bb-421e-a267-cadfab23a325] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004220642s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-112360 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.124.199 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
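
The ingress IP 10.99.124.199 falls inside the cluster's ServiceCIDR (10.96.0.0/12 in the config dumps above): minikube tunnel is what makes LoadBalancer service IPs reachable from the host. A sketch of the sequence, with the curl step as an assumption:

	out/minikube-linux-arm64 -p functional-112360 tunnel --alsologtostderr &
	kubectl --context functional-112360 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	curl -s http://10.99.124.199/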

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-112360 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-112360 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-112360 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-bl4sq" [ac0183d6-7705-4ccf-82a3-8fa9a0ce45e2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-bl4sq" [ac0183d6-7705-4ccf-82a3-8fa9a0ce45e2] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004691492s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 service list -o json
functional_test.go:1490: Took "629.202886ms" to run "out/minikube-linux-arm64 -p functional-112360 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "424.45224ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "103.562414ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31365
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "466.162946ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "94.368248ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

TestFunctional/parallel/MountCmd/any-port (7.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdany-port812428483/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710162305653549387" to /tmp/TestFunctionalparallelMountCmdany-port812428483/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710162305653549387" to /tmp/TestFunctionalparallelMountCmdany-port812428483/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710162305653549387" to /tmp/TestFunctionalparallelMountCmdany-port812428483/001/test-1710162305653549387
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (509.01124ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 11 13:05 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 11 13:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 11 13:05 test-1710162305653549387
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh cat /mount-9p/test-1710162305653549387
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-112360 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bb2b3606-6b7f-454f-9872-bbf6baffc63b] Pending
helpers_test.go:344: "busybox-mount" [bb2b3606-6b7f-454f-9872-bbf6baffc63b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bb2b3606-6b7f-454f-9872-bbf6baffc63b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bb2b3606-6b7f-454f-9872-bbf6baffc63b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006581616s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-112360 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdany-port812428483/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.78s)
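
The first findmnt probe races the mount daemon (hence the retried exit-1 above) before the 9p mount settles. A reproduction sketch; the host directory is illustrative:

	out/minikube-linux-arm64 mount -p functional-112360 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-112360 ssh -- ls -la /mount-9p
	out/minikube-linux-arm64 -p functional-112360 ssh "sudo umount -f /mount-9p"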

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31365
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/MountCmd/specific-port (2.46s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdspecific-port399589782/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (591.487246ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdspecific-port399589782/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 ssh "sudo umount -f /mount-9p": exit status 1 (396.852606ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-112360 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdspecific-port399589782/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.46s)
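
This variant pins the host side to port 46464 and stops the mount before the forced umount, so umount's exit 32 with "not mounted" is the pass condition here, not a failure. The pinned-port form, with an illustrative host directory:

	out/minikube-linux-arm64 mount -p functional-112360 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 --port 46464 &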

TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4199946547/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4199946547/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4199946547/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T" /mount1: (1.132114424s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-112360 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4199946547/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4199946547/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-112360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4199946547/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.41s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 version -o=json --components: (1.412285933s)
--- PASS: TestFunctional/parallel/Version/components (1.41s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-112360 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-112360
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-112360 image ls --format short --alsologtostderr:
I0311 13:05:38.445444 1157742 out.go:291] Setting OutFile to fd 1 ...
I0311 13:05:38.445595 1157742 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:38.445601 1157742 out.go:304] Setting ErrFile to fd 2...
I0311 13:05:38.445606 1157742 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:38.445880 1157742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
I0311 13:05:38.446586 1157742 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:38.446755 1157742 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:38.447308 1157742 cli_runner.go:164] Run: docker container inspect functional-112360 --format={{.State.Status}}
I0311 13:05:38.471588 1157742 ssh_runner.go:195] Run: systemctl --version
I0311 13:05:38.471648 1157742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-112360
I0311 13:05:38.494147 1157742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/functional-112360/id_rsa Username:docker}
I0311 13:05:38.589773 1157742 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
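
As the stderr trace shows, under the crio runtime image ls is backed by crictl inside the node, so the same inventory can be pulled directly:

	out/minikube-linux-arm64 -p functional-112360 image ls --format short
	out/minikube-linux-arm64 -p functional-112360 ssh "sudo crictl images --output json"   # the command image ls runs under the hood, per the log above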

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-112360 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| docker.io/library/nginx                 | alpine             | be5e6f23a9904 | 45.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | latest             | 760b7cbba31e1 | 196MB  |
| gcr.io/google-containers/addon-resizer  | functional-112360  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-112360 image ls --format table --alsologtostderr:
I0311 13:05:39.117647 1157884 out.go:291] Setting OutFile to fd 1 ...
I0311 13:05:39.117758 1157884 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:39.117764 1157884 out.go:304] Setting ErrFile to fd 2...
I0311 13:05:39.117768 1157884 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:39.118035 1157884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
I0311 13:05:39.118649 1157884 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:39.118771 1157884 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:39.119389 1157884 cli_runner.go:164] Run: docker container inspect functional-112360 --format={{.State.Status}}
I0311 13:05:39.145459 1157884 ssh_runner.go:195] Run: systemctl --version
I0311 13:05:39.145534 1157884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-112360
I0311 13:05:39.165490 1157884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/functional-112360/id_rsa Username:docker}
I0311 13:05:39.259758 1157884 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-112360 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-112360"],"size":"34114467"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"}
,{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/
k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[
"registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry
.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baa
b0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674","docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45393258"},{"id":"760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":["docker.io/library/nginx@sha256:c26ae7472d624ba1fafd2
96e73cecc4f93f853088e6a9c13c0d52f6ca5865107","docker.io/library/nginx@sha256:d5ec359034df4b326b8b5f0efa26dbd8742d166161b7edb37321b795c8fe5f48"],"repoTags":["docker.io/library/nginx:latest"],"size":"196117996"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-112360 image ls --format json --alsologtostderr:
I0311 13:05:38.814796 1157805 out.go:291] Setting OutFile to fd 1 ...
I0311 13:05:38.815006 1157805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:38.815035 1157805 out.go:304] Setting ErrFile to fd 2...
I0311 13:05:38.815046 1157805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:38.815364 1157805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
I0311 13:05:38.816147 1157805 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:38.816314 1157805 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:38.816968 1157805 cli_runner.go:164] Run: docker container inspect functional-112360 --format={{.State.Status}}
I0311 13:05:38.847200 1157805 ssh_runner.go:195] Run: systemctl --version
I0311 13:05:38.847255 1157805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-112360
I0311 13:05:38.866190 1157805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/functional-112360/id_rsa Username:docker}
I0311 13:05:38.958189 1157805 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
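Note: the stdout above is one JSON array of image objects, and untagged images (the dashboard and metrics-scraper entries) carry an empty repoTags list. A minimal sketch of filtering that output on the host, assuming jq is installed (jq is not part of this test run):

$ out/minikube-linux-arm64 -p functional-112360 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'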

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-112360 image ls --format yaml --alsologtostderr:
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "45393258"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests:
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
- docker.io/library/nginx@sha256:d5ec359034df4b326b8b5f0efa26dbd8742d166161b7edb37321b795c8fe5f48
repoTags:
- docker.io/library/nginx:latest
size: "196117996"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-112360
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-112360 image ls --format yaml --alsologtostderr:
I0311 13:05:38.464006 1157741 out.go:291] Setting OutFile to fd 1 ...
I0311 13:05:38.464533 1157741 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:38.464566 1157741 out.go:304] Setting ErrFile to fd 2...
I0311 13:05:38.464587 1157741 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:38.464870 1157741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
I0311 13:05:38.465533 1157741 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:38.465702 1157741 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:38.466295 1157741 cli_runner.go:164] Run: docker container inspect functional-112360 --format={{.State.Status}}
I0311 13:05:38.503224 1157741 ssh_runner.go:195] Run: systemctl --version
I0311 13:05:38.503283 1157741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-112360
I0311 13:05:38.528111 1157741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/functional-112360/id_rsa Username:docker}
I0311 13:05:38.622645 1157741 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
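Note: every top-level entry in the YAML listing above starts with a "- id:" key, so the image count can be sanity-checked without a YAML parser; a quick sketch against the same profile:

$ out/minikube-linux-arm64 -p functional-112360 image ls --format yaml | grep -c '^- id:'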

TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-112360 ssh pgrep buildkitd: exit status 1 (365.529579ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image build -t localhost/my-image:functional-112360 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 image build -t localhost/my-image:functional-112360 testdata/build --alsologtostderr: (2.211521614s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-112360 image build -t localhost/my-image:functional-112360 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f65cf5c0556
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-112360
--> 7044d53e5e4
Successfully tagged localhost/my-image:functional-112360
7044d53e5e466a7f528ad6088b8f553d59c0dae0256cc5fb62acd68c2135a6a4
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-112360 image build -t localhost/my-image:functional-112360 testdata/build --alsologtostderr:
I0311 13:05:39.100472 1157879 out.go:291] Setting OutFile to fd 1 ...
I0311 13:05:39.101297 1157879 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:39.101377 1157879 out.go:304] Setting ErrFile to fd 2...
I0311 13:05:39.101399 1157879 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:05:39.101763 1157879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
I0311 13:05:39.102544 1157879 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:39.109495 1157879 config.go:182] Loaded profile config "functional-112360": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 13:05:39.110713 1157879 cli_runner.go:164] Run: docker container inspect functional-112360 --format={{.State.Status}}
I0311 13:05:39.135888 1157879 ssh_runner.go:195] Run: systemctl --version
I0311 13:05:39.135941 1157879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-112360
I0311 13:05:39.155920 1157879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/functional-112360/id_rsa Username:docker}
I0311 13:05:39.250727 1157879 build_images.go:151] Building image from path: /tmp/build.2776705926.tar
I0311 13:05:39.250818 1157879 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0311 13:05:39.262830 1157879 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2776705926.tar
I0311 13:05:39.266942 1157879 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2776705926.tar: stat -c "%s %y" /var/lib/minikube/build/build.2776705926.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2776705926.tar': No such file or directory
I0311 13:05:39.266969 1157879 ssh_runner.go:362] scp /tmp/build.2776705926.tar --> /var/lib/minikube/build/build.2776705926.tar (3072 bytes)
I0311 13:05:39.315728 1157879 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2776705926
I0311 13:05:39.331122 1157879 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2776705926 -xf /var/lib/minikube/build/build.2776705926.tar
I0311 13:05:39.340548 1157879 crio.go:297] Building image: /var/lib/minikube/build/build.2776705926
I0311 13:05:39.340615 1157879 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-112360 /var/lib/minikube/build/build.2776705926 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0311 13:05:41.191886 1157879 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-112360 /var/lib/minikube/build/build.2776705926 --cgroup-manager=cgroupfs: (1.851248663s)
I0311 13:05:41.191969 1157879 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2776705926
I0311 13:05:41.201054 1157879 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2776705926.tar
I0311 13:05:41.209876 1157879 build_images.go:207] Built localhost/my-image:functional-112360 from /tmp/build.2776705926.tar
I0311 13:05:41.209916 1157879 build_images.go:123] succeeded building to: functional-112360
I0311 13:05:41.209922 1157879 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)
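Note: the STEP lines in the build stdout imply that testdata/build contains a three-instruction Dockerfile (a busybox base, a no-op RUN, and an ADD of content.txt). A hypothetical reconstruction of that build context, sketched from the log alone; the echo is a placeholder, since the real contents of content.txt are not shown:

$ mkdir -p build && cd build && echo hello > content.txt
$ cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
$ out/minikube-linux-arm64 -p functional-112360 image build -t localhost/my-image:functional-112360 .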

TestFunctional/parallel/ImageCommands/Setup (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/03/11 13:05:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.347434258s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-112360
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
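Note: update-context rewrites the profile's entry in the active kubeconfig so that it points at the machine's current address; judging by the subtest names, the three cases only vary the surrounding kubeconfig state (an up-to-date entry, a missing cluster entry, and no clusters at all). Minus the logging flags, each run reduces to:

$ out/minikube-linux-arm64 -p functional-112360 update-context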

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image load --daemon gcr.io/google-containers/addon-resizer:functional-112360 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 image load --daemon gcr.io/google-containers/addon-resizer:functional-112360 --alsologtostderr: (4.806016916s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.00s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image load --daemon gcr.io/google-containers/addon-resizer:functional-112360 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 image load --daemon gcr.io/google-containers/addon-resizer:functional-112360 --alsologtostderr: (2.755638156s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.632070777s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-112360
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image load --daemon gcr.io/google-containers/addon-resizer:functional-112360 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-112360 image load --daemon gcr.io/google-containers/addon-resizer:functional-112360 --alsologtostderr: (3.63178239s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.53s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image save gcr.io/google-containers/addon-resizer:functional-112360 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image rm gcr.io/google-containers/addon-resizer:functional-112360 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)
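Note: taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile exercise a full image-to-tar round trip. The same flow condensed, with /tmp standing in for the workspace path used above:

$ out/minikube-linux-arm64 -p functional-112360 image save gcr.io/google-containers/addon-resizer:functional-112360 /tmp/addon-resizer-save.tar
$ out/minikube-linux-arm64 -p functional-112360 image rm gcr.io/google-containers/addon-resizer:functional-112360
$ out/minikube-linux-arm64 -p functional-112360 image load /tmp/addon-resizer-save.tar
$ out/minikube-linux-arm64 -p functional-112360 image ls | grep addon-resizer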

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-112360
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-112360 image save --daemon gcr.io/google-containers/addon-resizer:functional-112360 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-112360
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-112360
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-112360
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-112360
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMutliControlPlane/serial/StartCluster (157.55s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-992796 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0311 13:06:43.056111 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:07:10.763979 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-992796 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m36.731371195s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (157.55s)
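Note: the --ha flag asks minikube start for a highly-available topology; in this run it produced three control-plane nodes (ha-992796, -m02, -m03) behind the shared apiserver endpoint 192.168.49.254:8443 visible later in this report, with the -m04 worker joined afterwards by AddWorkerNode. Stripped of test plumbing, the step reduces to:

$ out/minikube-linux-arm64 start -p ha-992796 --wait=true --memory=2200 --ha --driver=docker --container-runtime=crio
$ out/minikube-linux-arm64 -p ha-992796 status -v=7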

TestMutliControlPlane/serial/DeployApp (7.16s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-992796 -- rollout status deployment/busybox: (4.037274069s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-4rsrl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-slj6k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-x8wg7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-4rsrl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-slj6k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-x8wg7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-4rsrl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-slj6k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-x8wg7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (7.16s)

TestMutliControlPlane/serial/PingHostFromPods (1.73s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-4rsrl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-4rsrl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-slj6k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-slj6k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-x8wg7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-992796 -- exec busybox-5b5d89c9d6-x8wg7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.73s)
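Note: the probe above is the same two-step pipeline run inside each busybox pod. Annotated, assuming BusyBox nslookup's usual output layout, where line 5 is the "Address 1: <ip> <name>" line for the queried name:

# resolve the host alias minikube injects, keep line 5, take the third space-separated field (the IP)
nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
# then confirm the pod can reach the host side of the docker network with a single ICMP echo
ping -c 1 192.168.49.1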

TestMutliControlPlane/serial/AddWorkerNode (27.23s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-992796 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-992796 -v=7 --alsologtostderr: (26.271059918s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (27.23s)
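Note: node add without further flags joins a plain worker (ha-992796-m04 here) to the existing profile, and the follow-up status call is how the test checks that the new node came up alongside the three control planes. Minus logging flags:

$ out/minikube-linux-arm64 node add -p ha-992796
$ out/minikube-linux-arm64 -p ha-992796 status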

TestMutliControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-992796 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.10s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.80s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMutliControlPlane/serial/CopyFile (19.85s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-992796 status --output json -v=7 --alsologtostderr: (1.033632694s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp testdata/cp-test.txt ha-992796:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3414573276/001/cp-test_ha-992796.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796:/home/docker/cp-test.txt ha-992796-m02:/home/docker/cp-test_ha-992796_ha-992796-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test_ha-992796_ha-992796-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796:/home/docker/cp-test.txt ha-992796-m03:/home/docker/cp-test_ha-992796_ha-992796-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m03 "sudo cat /home/docker/cp-test_ha-992796_ha-992796-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796:/home/docker/cp-test.txt ha-992796-m04:/home/docker/cp-test_ha-992796_ha-992796-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m04 "sudo cat /home/docker/cp-test_ha-992796_ha-992796-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp testdata/cp-test.txt ha-992796-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3414573276/001/cp-test_ha-992796-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m02:/home/docker/cp-test.txt ha-992796:/home/docker/cp-test_ha-992796-m02_ha-992796.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796 "sudo cat /home/docker/cp-test_ha-992796-m02_ha-992796.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m02:/home/docker/cp-test.txt ha-992796-m03:/home/docker/cp-test_ha-992796-m02_ha-992796-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m03 "sudo cat /home/docker/cp-test_ha-992796-m02_ha-992796-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m02:/home/docker/cp-test.txt ha-992796-m04:/home/docker/cp-test_ha-992796-m02_ha-992796-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m04 "sudo cat /home/docker/cp-test_ha-992796-m02_ha-992796-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp testdata/cp-test.txt ha-992796-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3414573276/001/cp-test_ha-992796-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m03:/home/docker/cp-test.txt ha-992796:/home/docker/cp-test_ha-992796-m03_ha-992796.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796 "sudo cat /home/docker/cp-test_ha-992796-m03_ha-992796.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m03:/home/docker/cp-test.txt ha-992796-m02:/home/docker/cp-test_ha-992796-m03_ha-992796-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test_ha-992796-m03_ha-992796-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m03:/home/docker/cp-test.txt ha-992796-m04:/home/docker/cp-test_ha-992796-m03_ha-992796-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m04 "sudo cat /home/docker/cp-test_ha-992796-m03_ha-992796-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp testdata/cp-test.txt ha-992796-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3414573276/001/cp-test_ha-992796-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m04:/home/docker/cp-test.txt ha-992796:/home/docker/cp-test_ha-992796-m04_ha-992796.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796 "sudo cat /home/docker/cp-test_ha-992796-m04_ha-992796.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m04:/home/docker/cp-test.txt ha-992796-m02:/home/docker/cp-test_ha-992796-m04_ha-992796-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test_ha-992796-m04_ha-992796-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m04:/home/docker/cp-test.txt ha-992796-m03:/home/docker/cp-test_ha-992796-m04_ha-992796-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m03 "sudo cat /home/docker/cp-test_ha-992796-m04_ha-992796-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (19.85s)
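Note: the matrix above drives minikube cp in every direction on the four-node profile: host to node, node back to the host, and node to node, each copy verified with an ssh'd sudo cat. One representative triple, with an illustrative local destination path:

$ out/minikube-linux-arm64 -p ha-992796 cp testdata/cp-test.txt ha-992796-m02:/home/docker/cp-test.txt
$ out/minikube-linux-arm64 -p ha-992796 cp ha-992796-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-992796-m02.txt
$ out/minikube-linux-arm64 -p ha-992796 ssh -n ha-992796-m02 "sudo cat /home/docker/cp-test.txt"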

TestMutliControlPlane/serial/StopSecondaryNode (12.74s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-992796 node stop m02 -v=7 --alsologtostderr: (11.986147426s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr: exit status 7 (757.546607ms)

-- stdout --
	ha-992796
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-992796-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-992796-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-992796-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0311 13:09:30.966902 1172804 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:09:30.967033 1172804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:09:30.967043 1172804 out.go:304] Setting ErrFile to fd 2...
	I0311 13:09:30.967049 1172804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:09:30.967267 1172804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 13:09:30.967453 1172804 out.go:298] Setting JSON to false
	I0311 13:09:30.967485 1172804 mustload.go:65] Loading cluster: ha-992796
	I0311 13:09:30.967587 1172804 notify.go:220] Checking for updates...
	I0311 13:09:30.967943 1172804 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:09:30.967957 1172804 status.go:255] checking status of ha-992796 ...
	I0311 13:09:30.968431 1172804 cli_runner.go:164] Run: docker container inspect ha-992796 --format={{.State.Status}}
	I0311 13:09:30.989567 1172804 status.go:330] ha-992796 host status = "Running" (err=<nil>)
	I0311 13:09:30.989611 1172804 host.go:66] Checking if "ha-992796" exists ...
	I0311 13:09:30.989907 1172804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796
	I0311 13:09:31.011842 1172804 host.go:66] Checking if "ha-992796" exists ...
	I0311 13:09:31.012172 1172804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:09:31.012229 1172804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796
	I0311 13:09:31.037223 1172804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796/id_rsa Username:docker}
	I0311 13:09:31.131648 1172804 ssh_runner.go:195] Run: systemctl --version
	I0311 13:09:31.136622 1172804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:09:31.149824 1172804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:09:31.234305 1172804 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-11 13:09:31.223996782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:09:31.234966 1172804 kubeconfig.go:125] found "ha-992796" server: "https://192.168.49.254:8443"
	I0311 13:09:31.234997 1172804 api_server.go:166] Checking apiserver status ...
	I0311 13:09:31.235042 1172804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:09:31.246084 1172804 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1415/cgroup
	I0311 13:09:31.255975 1172804 api_server.go:182] apiserver freezer: "12:freezer:/docker/824aa912f5877e99c31410983233e425659e44ded7a4f077c57a2e3d284b2b29/crio/crio-0b9a3c9702f876b1c49cb7885eae0d3cb4bff5a7dedc2aace9d172316216026c"
	I0311 13:09:31.256045 1172804 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/824aa912f5877e99c31410983233e425659e44ded7a4f077c57a2e3d284b2b29/crio/crio-0b9a3c9702f876b1c49cb7885eae0d3cb4bff5a7dedc2aace9d172316216026c/freezer.state
	I0311 13:09:31.266242 1172804 api_server.go:204] freezer state: "THAWED"
	I0311 13:09:31.266268 1172804 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0311 13:09:31.276071 1172804 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0311 13:09:31.276103 1172804 status.go:422] ha-992796 apiserver status = Running (err=<nil>)
	I0311 13:09:31.276116 1172804 status.go:257] ha-992796 status: &{Name:ha-992796 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:09:31.276155 1172804 status.go:255] checking status of ha-992796-m02 ...
	I0311 13:09:31.276490 1172804 cli_runner.go:164] Run: docker container inspect ha-992796-m02 --format={{.State.Status}}
	I0311 13:09:31.292212 1172804 status.go:330] ha-992796-m02 host status = "Stopped" (err=<nil>)
	I0311 13:09:31.292237 1172804 status.go:343] host is not running, skipping remaining checks
	I0311 13:09:31.292245 1172804 status.go:257] ha-992796-m02 status: &{Name:ha-992796-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:09:31.292268 1172804 status.go:255] checking status of ha-992796-m03 ...
	I0311 13:09:31.292593 1172804 cli_runner.go:164] Run: docker container inspect ha-992796-m03 --format={{.State.Status}}
	I0311 13:09:31.312827 1172804 status.go:330] ha-992796-m03 host status = "Running" (err=<nil>)
	I0311 13:09:31.312856 1172804 host.go:66] Checking if "ha-992796-m03" exists ...
	I0311 13:09:31.313167 1172804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796-m03
	I0311 13:09:31.330787 1172804 host.go:66] Checking if "ha-992796-m03" exists ...
	I0311 13:09:31.331104 1172804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:09:31.331154 1172804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m03
	I0311 13:09:31.349541 1172804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33957 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m03/id_rsa Username:docker}
	I0311 13:09:31.442862 1172804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:09:31.455686 1172804 kubeconfig.go:125] found "ha-992796" server: "https://192.168.49.254:8443"
	I0311 13:09:31.455714 1172804 api_server.go:166] Checking apiserver status ...
	I0311 13:09:31.455764 1172804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:09:31.466758 1172804 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1328/cgroup
	I0311 13:09:31.477011 1172804 api_server.go:182] apiserver freezer: "12:freezer:/docker/494d288625c735eb185b28f1ea585adc4c103a1531604ace3fec7d740750388a/crio/crio-96a5074000f7cf7ae62da069222e3c3d8554b4d82d43586eb1b3dbe28900694b"
	I0311 13:09:31.477102 1172804 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/494d288625c735eb185b28f1ea585adc4c103a1531604ace3fec7d740750388a/crio/crio-96a5074000f7cf7ae62da069222e3c3d8554b4d82d43586eb1b3dbe28900694b/freezer.state
	I0311 13:09:31.487064 1172804 api_server.go:204] freezer state: "THAWED"
	I0311 13:09:31.487094 1172804 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0311 13:09:31.495652 1172804 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0311 13:09:31.495685 1172804 status.go:422] ha-992796-m03 apiserver status = Running (err=<nil>)
	I0311 13:09:31.495695 1172804 status.go:257] ha-992796-m03 status: &{Name:ha-992796-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:09:31.495717 1172804 status.go:255] checking status of ha-992796-m04 ...
	I0311 13:09:31.496028 1172804 cli_runner.go:164] Run: docker container inspect ha-992796-m04 --format={{.State.Status}}
	I0311 13:09:31.512398 1172804 status.go:330] ha-992796-m04 host status = "Running" (err=<nil>)
	I0311 13:09:31.512426 1172804 host.go:66] Checking if "ha-992796-m04" exists ...
	I0311 13:09:31.512731 1172804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992796-m04
	I0311 13:09:31.529219 1172804 host.go:66] Checking if "ha-992796-m04" exists ...
	I0311 13:09:31.529574 1172804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:09:31.529629 1172804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992796-m04
	I0311 13:09:31.546286 1172804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33962 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/ha-992796-m04/id_rsa Username:docker}
	I0311 13:09:31.642700 1172804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:09:31.655480 1172804 status.go:257] ha-992796-m04 status: &{Name:ha-992796-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
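For reference, the status probe traced above follows a fixed recipe on every control-plane node: find the newest kube-apiserver process, map it to its freezer cgroup, confirm the container is THAWED rather than paused, then hit the load-balanced /healthz endpoint. A minimal shell equivalent of the same check, run inside a node (endpoint and expected values taken from this run; error handling omitted):

	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
	sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"   # expect THAWED
	curl -sk https://192.168.49.254:8443/healthz           # expect "ok"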
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.74s)

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
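The Degraded* checks in this suite assert on the profile's Status field from `profile list --output json` rather than on `status` exit codes. Assuming the usual valid/invalid envelope of that JSON, the same check by hand would look like this (the jq filter is illustrative, not part of the test):

	out/minikube-linux-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-992796") | .Status'
	# expected with one control-plane node stopped: Degraded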
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

TestMutliControlPlane/serial/RestartSecondaryNode (21.19s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 node start m02 -v=7 --alsologtostderr
E0311 13:09:36.485710 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:36.490913 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:36.501123 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:36.521853 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:36.562068 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:36.642337 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:36.802650 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:37.123301 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:37.763498 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:39.044229 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:41.604653 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:09:46.725033 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-992796 node start m02 -v=7 --alsologtostderr: (19.750564342s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr: (1.3287232s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (21.19s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.4s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0311 13:09:56.965263 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (16.403995915s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.40s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (215.13s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-992796 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-992796 -v=7 --alsologtostderr
E0311 13:10:17.445877 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-992796 -v=7 --alsologtostderr: (37.02267755s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-992796 --wait=true -v=7 --alsologtostderr
E0311 13:10:58.406083 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:11:43.055636 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:12:20.326302 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-992796 --wait=true -v=7 --alsologtostderr: (2m57.914604051s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-992796
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (215.13s)

TestMutliControlPlane/serial/DeleteSecondaryNode (12.04s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-992796 node delete m03 -v=7 --alsologtostderr: (11.01847536s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
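The go-template in the step above walks every node's conditions and prints only the Ready status, one per line; with the three nodes left after deleting m03, the expected output is three ` True` lines, and any `False` or `Unknown` line would fail the test:

	 True
	 True
	 True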
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (12.04s)

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMutliControlPlane/serial/StopCluster (35.74s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-992796 stop -v=7 --alsologtostderr: (35.624790181s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr: exit status 7 (110.416119ms)
-- stdout --
	ha-992796
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-992796-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-992796-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:14:33.245281 1186684 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:14:33.245540 1186684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:14:33.245572 1186684 out.go:304] Setting ErrFile to fd 2...
	I0311 13:14:33.245594 1186684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:14:33.245864 1186684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 13:14:33.246087 1186684 out.go:298] Setting JSON to false
	I0311 13:14:33.246150 1186684 mustload.go:65] Loading cluster: ha-992796
	I0311 13:14:33.246226 1186684 notify.go:220] Checking for updates...
	I0311 13:14:33.246618 1186684 config.go:182] Loaded profile config "ha-992796": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:14:33.246658 1186684 status.go:255] checking status of ha-992796 ...
	I0311 13:14:33.247458 1186684 cli_runner.go:164] Run: docker container inspect ha-992796 --format={{.State.Status}}
	I0311 13:14:33.264179 1186684 status.go:330] ha-992796 host status = "Stopped" (err=<nil>)
	I0311 13:14:33.264201 1186684 status.go:343] host is not running, skipping remaining checks
	I0311 13:14:33.264209 1186684 status.go:257] ha-992796 status: &{Name:ha-992796 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:14:33.264239 1186684 status.go:255] checking status of ha-992796-m02 ...
	I0311 13:14:33.264546 1186684 cli_runner.go:164] Run: docker container inspect ha-992796-m02 --format={{.State.Status}}
	I0311 13:14:33.280046 1186684 status.go:330] ha-992796-m02 host status = "Stopped" (err=<nil>)
	I0311 13:14:33.280070 1186684 status.go:343] host is not running, skipping remaining checks
	I0311 13:14:33.280078 1186684 status.go:257] ha-992796-m02 status: &{Name:ha-992796-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:14:33.280114 1186684 status.go:255] checking status of ha-992796-m04 ...
	I0311 13:14:33.280411 1186684 cli_runner.go:164] Run: docker container inspect ha-992796-m04 --format={{.State.Status}}
	I0311 13:14:33.302873 1186684 status.go:330] ha-992796-m04 host status = "Stopped" (err=<nil>)
	I0311 13:14:33.302897 1186684 status.go:343] host is not running, skipping remaining checks
	I0311 13:14:33.302905 1186684 status.go:257] ha-992796-m04 status: &{Name:ha-992796-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (35.74s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.57s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMutliControlPlane/serial/AddSecondaryNode (64.94s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-992796 --control-plane -v=7 --alsologtostderr
E0311 13:16:43.055779 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-992796 --control-plane -v=7 --alsologtostderr: (1m3.784780102s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-992796 status -v=7 --alsologtostderr: (1.153242051s)
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (64.94s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

TestJSONOutput/start/Command (49.04s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-486269 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0311 13:18:06.125510 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-486269 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (49.037204788s)
--- PASS: TestJSONOutput/start/Command (49.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-486269 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-486269 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.9s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-486269 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-486269 --output=json --user=testUser: (5.902011469s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-962372 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-962372 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.605386ms)
-- stdout --
	{"specversion":"1.0","id":"501be23b-7c95-4f31-a7bb-edc9b35ed8cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-962372] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d23f13b-3771-4cd6-91bf-e8e045b15719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18350"}}
	{"specversion":"1.0","id":"a5129a1f-6811-4e4a-b0c8-36fb2fc138fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4774d733-b13d-4dc6-9c7e-c6dff484383a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig"}}
	{"specversion":"1.0","id":"32adfe30-5b71-4eb6-b2f6-20c257889b03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube"}}
	{"specversion":"1.0","id":"73809503-0fc1-406b-b294-03f613c43047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"53889da6-6089-4153-8fde-caa40249fd4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"45af0a97-7b1a-47b6-a685-cd50a894d3ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
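Each line in the block above is a CloudEvents envelope, and the final io.k8s.sigs.minikube.error event is what carries exit code 56 and the DRV_UNSUPPORTED_OS name. A quick way to extract just the error events from such a run (a sketch; the jq filter is not part of the test):

	out/minikube-linux-arm64 start -p json-output-error-962372 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64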
helpers_test.go:175: Cleaning up "json-output-error-962372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-962372
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (43.95s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-759938 --network=
E0311 13:19:36.485493 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-759938 --network=: (41.732547848s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-759938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-759938
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-759938: (2.201431364s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.95s)

TestKicCustomNetwork/use_default_bridge_network (36.67s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-665926 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-665926 --network=bridge: (34.601614949s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-665926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-665926
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-665926: (2.037054032s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.67s)

TestKicExistingNetwork (33.03s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-070533 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-070533 --network=existing-network: (30.915757128s)
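This test only exercises the reuse path because the `existing-network` docker network is created before minikube is pointed at it; the manual equivalent would be (the subnet shown is illustrative, the test picks a free one):

	docker network create existing-network --subnet=192.168.9.0/24
	out/minikube-linux-arm64 start -p existing-network-070533 --network=existing-network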
helpers_test.go:175: Cleaning up "existing-network-070533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-070533
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-070533: (1.958980365s)
--- PASS: TestKicExistingNetwork (33.03s)

TestKicCustomSubnet (34.47s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-382025 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-382025 --subnet=192.168.60.0/24: (32.413894494s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-382025 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-382025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-382025
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-382025: (2.028470295s)
--- PASS: TestKicCustomSubnet (34.47s)

TestKicStaticIP (34.07s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-539449 --static-ip=192.168.200.200
E0311 13:21:43.055848 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-539449 --static-ip=192.168.200.200: (31.502664395s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-539449 ip
helpers_test.go:175: Cleaning up "static-ip-539449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-539449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-539449: (2.407980059s)
--- PASS: TestKicStaticIP (34.07s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (67.39s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-188792 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-188792 --driver=docker  --container-runtime=crio: (31.370738544s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-191623 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-191623 --driver=docker  --container-runtime=crio: (30.582126415s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-188792
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-191623
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-191623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-191623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-191623: (1.960444204s)
helpers_test.go:175: Cleaning up "first-188792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-188792
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-188792: (2.258687367s)
--- PASS: TestMinikubeProfile (67.39s)

TestMountStart/serial/StartWithMountFirst (6.59s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-431536 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-431536 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.584924054s)
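The flags above pin every tunable of the 9p host mount: owner (--mount-uid/--mount-gid 0), 9p message size (--mount-msize 6543), and port (--mount-port 46464). The Verify steps below check the share with a plain `ls`; a more explicit probe from the host would be (a sketch):

	out/minikube-linux-arm64 -p mount-start-1-431536 ssh -- mount | grep /minikube-host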
--- PASS: TestMountStart/serial/StartWithMountFirst (6.59s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-431536 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (9.1s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-445000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-445000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.098762031s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.10s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-445000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-431536 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-431536 --alsologtostderr -v=5: (1.643640729s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-445000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-445000
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-445000: (1.201961214s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.97s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-445000
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-445000: (7.970261948s)
--- PASS: TestMountStart/serial/RestartStopped (8.97s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-445000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (64.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-605542 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0311 13:24:36.486322 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-605542 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.627871s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.13s)

TestMultiNode/serial/DeployApp2Nodes (5.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-605542 -- rollout status deployment/busybox: (2.88143119s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-8zmjs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-db5pl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-8zmjs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-db5pl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-8zmjs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-db5pl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.03s)

TestMultiNode/serial/PingHostFrom2Pods (1.07s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-8zmjs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-8zmjs -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-db5pl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-605542 -- exec busybox-5b5d89c9d6-db5pl -- sh -c "ping -c 1 192.168.58.1"
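The pipeline used on both pods relies on the fixed shape of busybox nslookup output, whose fifth line is typically `Address 1: <ip> host.minikube.internal`; `awk 'NR==5'` selects that line and `cut -d' ' -f3` extracts the address, which the follow-up step pings. Condensed into one pod-side sequence (a sketch):

	IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$IP"   # resolves to 192.168.58.1 in this run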
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)

TestMultiNode/serial/AddNode (19.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-605542 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-605542 -v 3 --alsologtostderr: (19.065621975s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.76s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-605542 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
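The jsonpath above dumps each node's full label map, which the test then checks for the expected minikube.k8s.io/* labels. To print one specific label per node instead, the dots inside the key have to be escaped (a sketch, assuming the standard minikube.k8s.io/primary label is present):

	kubectl --context multinode-605542 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.minikube\.k8s\.io/primary}{"\n"}{end}'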
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (10.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp testdata/cp-test.txt multinode-605542:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1808615450/001/cp-test_multinode-605542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542:/home/docker/cp-test.txt multinode-605542-m02:/home/docker/cp-test_multinode-605542_multinode-605542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m02 "sudo cat /home/docker/cp-test_multinode-605542_multinode-605542-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542:/home/docker/cp-test.txt multinode-605542-m03:/home/docker/cp-test_multinode-605542_multinode-605542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m03 "sudo cat /home/docker/cp-test_multinode-605542_multinode-605542-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp testdata/cp-test.txt multinode-605542-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1808615450/001/cp-test_multinode-605542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542-m02:/home/docker/cp-test.txt multinode-605542:/home/docker/cp-test_multinode-605542-m02_multinode-605542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542 "sudo cat /home/docker/cp-test_multinode-605542-m02_multinode-605542.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542-m02:/home/docker/cp-test.txt multinode-605542-m03:/home/docker/cp-test_multinode-605542-m02_multinode-605542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m03 "sudo cat /home/docker/cp-test_multinode-605542-m02_multinode-605542-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp testdata/cp-test.txt multinode-605542-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1808615450/001/cp-test_multinode-605542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542-m03:/home/docker/cp-test.txt multinode-605542:/home/docker/cp-test_multinode-605542-m03_multinode-605542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542 "sudo cat /home/docker/cp-test_multinode-605542-m03_multinode-605542.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542-m03:/home/docker/cp-test.txt multinode-605542-m02:/home/docker/cp-test_multinode-605542-m03_multinode-605542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 ssh -n multinode-605542-m02 "sudo cat /home/docker/cp-test_multinode-605542-m03_multinode-605542-m02.txt"
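The matrix above covers all three directions `minikube cp` supports, each verified with an `ssh -n <node> sudo cat` readback. Reduced to one command per direction (source and node paths from this run; the local destination is simplified):

	out/minikube-linux-arm64 -p multinode-605542 cp testdata/cp-test.txt multinode-605542:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-arm64 -p multinode-605542 cp multinode-605542:/home/docker/cp-test.txt multinode-605542-m02:/home/docker/cp-test.txt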
--- PASS: TestMultiNode/serial/CopyFile (10.24s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-605542 node stop m03: (1.219165082s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-605542 status: exit status 7 (524.75736ms)
-- stdout --
	multinode-605542
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-605542-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-605542-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-605542 status --alsologtostderr: exit status 7 (526.712191ms)
-- stdout --
	multinode-605542
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-605542-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-605542-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:25:21.384803 1239790 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:25:21.385010 1239790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:25:21.385039 1239790 out.go:304] Setting ErrFile to fd 2...
	I0311 13:25:21.385059 1239790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:25:21.385375 1239790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 13:25:21.385594 1239790 out.go:298] Setting JSON to false
	I0311 13:25:21.385665 1239790 mustload.go:65] Loading cluster: multinode-605542
	I0311 13:25:21.385734 1239790 notify.go:220] Checking for updates...
	I0311 13:25:21.386150 1239790 config.go:182] Loaded profile config "multinode-605542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:25:21.386185 1239790 status.go:255] checking status of multinode-605542 ...
	I0311 13:25:21.386820 1239790 cli_runner.go:164] Run: docker container inspect multinode-605542 --format={{.State.Status}}
	I0311 13:25:21.404298 1239790 status.go:330] multinode-605542 host status = "Running" (err=<nil>)
	I0311 13:25:21.404319 1239790 host.go:66] Checking if "multinode-605542" exists ...
	I0311 13:25:21.404662 1239790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-605542
	I0311 13:25:21.420995 1239790 host.go:66] Checking if "multinode-605542" exists ...
	I0311 13:25:21.421472 1239790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:25:21.421540 1239790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-605542
	I0311 13:25:21.453831 1239790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34067 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/multinode-605542/id_rsa Username:docker}
	I0311 13:25:21.546816 1239790 ssh_runner.go:195] Run: systemctl --version
	I0311 13:25:21.551345 1239790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:25:21.562931 1239790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:25:21.631341 1239790 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-11 13:25:21.621716641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:25:21.631939 1239790 kubeconfig.go:125] found "multinode-605542" server: "https://192.168.58.2:8443"
	I0311 13:25:21.631965 1239790 api_server.go:166] Checking apiserver status ...
	I0311 13:25:21.632016 1239790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:25:21.643389 1239790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	I0311 13:25:21.653335 1239790 api_server.go:182] apiserver freezer: "12:freezer:/docker/a2d9d863cb654aeb2debcb50550aa53fc0277bddb362136d40c8bfea699eaf25/crio/crio-8c73699aea63be3388e6b0be8356a06f68429b22c81f7a770e897b3cafa8528c"
	I0311 13:25:21.653497 1239790 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a2d9d863cb654aeb2debcb50550aa53fc0277bddb362136d40c8bfea699eaf25/crio/crio-8c73699aea63be3388e6b0be8356a06f68429b22c81f7a770e897b3cafa8528c/freezer.state
	I0311 13:25:21.662513 1239790 api_server.go:204] freezer state: "THAWED"
	I0311 13:25:21.662542 1239790 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0311 13:25:21.671173 1239790 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0311 13:25:21.671204 1239790 status.go:422] multinode-605542 apiserver status = Running (err=<nil>)
	I0311 13:25:21.671216 1239790 status.go:257] multinode-605542 status: &{Name:multinode-605542 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:25:21.671233 1239790 status.go:255] checking status of multinode-605542-m02 ...
	I0311 13:25:21.671554 1239790 cli_runner.go:164] Run: docker container inspect multinode-605542-m02 --format={{.State.Status}}
	I0311 13:25:21.691408 1239790 status.go:330] multinode-605542-m02 host status = "Running" (err=<nil>)
	I0311 13:25:21.691434 1239790 host.go:66] Checking if "multinode-605542-m02" exists ...
	I0311 13:25:21.691735 1239790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-605542-m02
	I0311 13:25:21.707100 1239790 host.go:66] Checking if "multinode-605542-m02" exists ...
	I0311 13:25:21.707437 1239790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:25:21.707484 1239790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-605542-m02
	I0311 13:25:21.724043 1239790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34072 SSHKeyPath:/home/jenkins/minikube-integration/18350-1124504/.minikube/machines/multinode-605542-m02/id_rsa Username:docker}
	I0311 13:25:21.814523 1239790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:25:21.825296 1239790 status.go:257] multinode-605542-m02 status: &{Name:multinode-605542-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:25:21.825331 1239790 status.go:255] checking status of multinode-605542-m03 ...
	I0311 13:25:21.825664 1239790 cli_runner.go:164] Run: docker container inspect multinode-605542-m03 --format={{.State.Status}}
	I0311 13:25:21.840642 1239790 status.go:330] multinode-605542-m03 host status = "Stopped" (err=<nil>)
	I0311 13:25:21.840666 1239790 status.go:343] host is not running, skipping remaining checks
	I0311 13:25:21.840674 1239790 status.go:257] multinode-605542-m03 status: &{Name:multinode-605542-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
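For reference, the behavior asserted above reduces to two commands; a minimal shell sketch (profile and node names taken from the log, with plain `minikube` standing in for `out/minikube-linux-arm64`):

    # stop one worker node in a multi-node profile
    minikube -p multinode-605542 node stop m03
    # status now exits with code 7 because m03's host and kubelet are Stopped
    minikube -p multinode-605542 status; echo "exit code: $?"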

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-605542 node start m03 -v=7 --alsologtostderr: (8.687786298s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.46s)
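The companion restart is symmetric; sketch:

    # bring the stopped worker back and confirm status returns to exit code 0
    minikube -p multinode-605542 node start m03
    minikube -p multinode-605542 status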

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (84.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-605542
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-605542
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-605542: (24.832698626s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-605542 --wait=true -v=8 --alsologtostderr
E0311 13:25:59.527114 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:26:43.056176 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-605542 --wait=true -v=8 --alsologtostderr: (59.941323774s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-605542
--- PASS: TestMultiNode/serial/RestartKeepsNodes (84.91s)
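What the test verifies, reduced to commands: the node set recorded before a full stop must reappear after a restart (the test passes --wait=true so start blocks until components are healthy). Sketch:

    minikube node list -p multinode-605542   # record the node set
    minikube stop -p multinode-605542
    minikube start -p multinode-605542 --wait=true
    minikube node list -p multinode-605542   # the same nodes should be listed again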

                                                
                                    
TestMultiNode/serial/DeleteNode (5.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-605542 node delete m03: (4.665659116s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.34s)
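Node removal plus the readiness check, as a sketch (the go-template in the log prints one Ready-condition status per remaining node):

    minikube -p multinode-605542 node delete m03
    kubectl get nodes   # only multinode-605542 and -m02 should remain, both Ready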

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-605542 stop: (23.740280457s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-605542 status: exit status 7 (94.261109ms)

                                                
                                                
-- stdout --
	multinode-605542
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-605542-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-605542 status --alsologtostderr: exit status 7 (96.018751ms)

                                                
                                                
-- stdout --
	multinode-605542
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-605542-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 13:27:25.448199 1246751 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:27:25.448395 1246751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:27:25.448426 1246751 out.go:304] Setting ErrFile to fd 2...
	I0311 13:27:25.448446 1246751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:27:25.448708 1246751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 13:27:25.448930 1246751 out.go:298] Setting JSON to false
	I0311 13:27:25.448999 1246751 mustload.go:65] Loading cluster: multinode-605542
	I0311 13:27:25.449028 1246751 notify.go:220] Checking for updates...
	I0311 13:27:25.449483 1246751 config.go:182] Loaded profile config "multinode-605542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 13:27:25.449517 1246751 status.go:255] checking status of multinode-605542 ...
	I0311 13:27:25.450058 1246751 cli_runner.go:164] Run: docker container inspect multinode-605542 --format={{.State.Status}}
	I0311 13:27:25.471713 1246751 status.go:330] multinode-605542 host status = "Stopped" (err=<nil>)
	I0311 13:27:25.471735 1246751 status.go:343] host is not running, skipping remaining checks
	I0311 13:27:25.471744 1246751 status.go:257] multinode-605542 status: &{Name:multinode-605542 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:27:25.471772 1246751 status.go:255] checking status of multinode-605542-m02 ...
	I0311 13:27:25.472086 1246751 cli_runner.go:164] Run: docker container inspect multinode-605542-m02 --format={{.State.Status}}
	I0311 13:27:25.488438 1246751 status.go:330] multinode-605542-m02 host status = "Stopped" (err=<nil>)
	I0311 13:27:25.488459 1246751 status.go:343] host is not running, skipping remaining checks
	I0311 13:27:25.488466 1246751 status.go:257] multinode-605542-m02 status: &{Name:multinode-605542-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-605542 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-605542 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (47.895673494s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-605542 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.64s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-605542
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-605542-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-605542-m02 --driver=docker  --container-runtime=crio: exit status 14 (107.199063ms)

                                                
                                                
-- stdout --
	* [multinode-605542-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-605542-m02' is duplicated with machine name 'multinode-605542-m02' in profile 'multinode-605542'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-605542-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-605542-m03 --driver=docker  --container-runtime=crio: (31.680085916s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-605542
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-605542: exit status 80 (329.10749ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-605542 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-605542-m03 already exists in multinode-605542-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-605542-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-605542-m03: (1.953846245s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.13s)
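The two failure modes exercised here can be reproduced with the commands from the log; sketch (exit 14 = MK_USAGE, exit 80 = GUEST_NODE_ADD):

    # rejected (exit 14): profile name collides with an existing machine name
    minikube start -p multinode-605542-m02 --driver=docker --container-runtime=crio
    # a standalone profile that happens to match the next auto-assigned node name
    minikube start -p multinode-605542-m03 --driver=docker --container-runtime=crio
    # rejected (exit 80): the node name multinode-605542-m03 is already taken
    minikube node add -p multinode-605542
    minikube delete -p multinode-605542-m03   # clean up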

                                                
                                    
TestPreload (119.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-273986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0311 13:29:36.488558 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-273986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m27.254327079s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-273986 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-273986 image pull gcr.io/k8s-minikube/busybox: (1.951947031s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-273986
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-273986: (5.785477351s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-273986 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-273986 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.366907232s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-273986 image list
helpers_test.go:175: Cleaning up "test-preload-273986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-273986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-273986: (2.439048233s)
--- PASS: TestPreload (119.11s)
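The guarantee under test: an image pulled into a cluster created with --preload=false must still be listed after a stop/start cycle that picks up the preloaded tarball. Sketch with an illustrative profile name:

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --driver=docker --container-runtime=crio
    minikube -p preload-demo image list   # busybox should still appear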

                                                
                                    
TestScheduledStopUnix (106.29s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-706715 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-706715 --memory=2048 --driver=docker  --container-runtime=crio: (30.13824793s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-706715 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-706715 -n scheduled-stop-706715
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-706715 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-706715 --cancel-scheduled
E0311 13:31:43.055738 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-706715 -n scheduled-stop-706715
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-706715
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-706715 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-706715
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-706715: exit status 7 (78.437242ms)

                                                
                                                
-- stdout --
	scheduled-stop-706715
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-706715 -n scheduled-stop-706715
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-706715 -n scheduled-stop-706715: exit status 7 (78.511012ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-706715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-706715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-706715: (4.529819067s)
--- PASS: TestScheduledStopUnix (106.29s)
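The scheduled-stop flags exercised above, as a standalone sketch (profile name illustrative; once the 15s timer fires, status reports Stopped with exit code 7, as in the output):

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled   # disarm it
    minikube stop -p sched-demo --schedule 15s       # re-arm with a short fuse
    sleep 20
    minikube status -p sched-demo --format='{{.Host}}'   # prints "Stopped"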

                                                
                                    
TestInsufficientStorage (11.22s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-206015 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-206015 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.746494945s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7583a981-7c03-45c7-af60-df8e0e931af4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-206015] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52bf9e99-e4f1-4f7e-af44-a5f7e12900a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18350"}}
	{"specversion":"1.0","id":"992dbcb8-cb52-4112-bb01-b8bde99b7ab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"242c4f9a-3ce5-44f0-988f-69fb63d6d88f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig"}}
	{"specversion":"1.0","id":"bd57278a-66a8-4bd2-a402-618134131994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube"}}
	{"specversion":"1.0","id":"b536af28-8f54-4137-b992-f87c2b8e09ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3c19daf2-8451-4eec-88ed-ade1fa00c2b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d2314c84-13c9-4638-99e1-47610031e67c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"67ba1c75-3ad7-4b28-a75f-546b07080ae1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6fb12121-2ee3-4a1f-a12e-91d6154ac1e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"273509be-efab-4d61-9dda-d3dcd9e82e97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"78a910f8-8a90-424e-8162-bb020c28f00c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-206015\" primary control-plane node in \"insufficient-storage-206015\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"affb19b2-2f6d-4f31-9850-b353ddcad90a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab126238-953c-458e-bb75-8abe71c61768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"773fc227-01cb-4967-b448-b5fbfc7321a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-206015 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-206015 --output=json --layout=cluster: exit status 7 (278.534648ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-206015","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-206015","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 13:32:46.780540 1263355 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-206015" does not appear in /home/jenkins/minikube-integration/18350-1124504/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-206015 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-206015 --output=json --layout=cluster: exit status 7 (280.839443ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-206015","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-206015","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 13:32:47.059781 1263408 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-206015" does not appear in /home/jenkins/minikube-integration/18350-1124504/kubeconfig
	E0311 13:32:47.070018 1263408 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/insufficient-storage-206015/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-206015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-206015
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-206015: (1.910522563s)
--- PASS: TestInsufficientStorage (11.22s)
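The test fakes a full disk through two test-only knobs that are echoed in the JSON output above (MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE, set here as environment variables); a sketch of the reproduction, which aborts with exit code 26 (RSRC_DOCKER_STORAGE):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --output=json --driver=docker --container-runtime=crio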

                                                
                                    
TestRunningBinaryUpgrade (87.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.976687398 start -p running-upgrade-172384 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.976687398 start -p running-upgrade-172384 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.433211958s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-172384 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-172384 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.752261592s)
helpers_test.go:175: Cleaning up "running-upgrade-172384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-172384
E0311 13:39:36.486163 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-172384: (2.696741615s)
--- PASS: TestRunningBinaryUpgrade (87.05s)
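The running-binary upgrade is two starts against the same profile with different binaries; sketch, with ./minikube-v1.26.0 standing in for the old release the test drops in /tmp:

    # create the cluster with the old binary (note the legacy --vm-driver spelling)
    ./minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
    # the current binary then adopts the still-running cluster in place
    minikube start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio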

                                                
                                    
TestKubernetesUpgrade (402.99s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-006721 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-006721 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.550842971s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-006721
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-006721: (1.426573019s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-006721 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-006721 status --format={{.Host}}: exit status 7 (242.259137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-006721 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-006721 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m49.898047678s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-006721 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-006721 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-006721 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (105.60232ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-006721] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-006721
	    minikube start -p kubernetes-upgrade-006721 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0067212 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-006721 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-006721 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-006721 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.313052732s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-006721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-006721
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-006721: (2.343580109s)
--- PASS: TestKubernetesUpgrade (402.99s)
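Reduced to commands, the contract tested here is: upgrading across a stop is supported, while downgrading is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and the delete-and-recreate suggestion shown above. Sketch with an illustrative profile name:

    minikube start -p k8s-upgrade --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop -p k8s-upgrade
    minikube start -p k8s-upgrade --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
    # refused (exit 106): an existing cluster cannot be safely downgraded
    minikube start -p k8s-upgrade --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio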

                                                
                                    
TestMissingContainerUpgrade (154.13s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3811162143 start -p missing-upgrade-319440 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3811162143 start -p missing-upgrade-319440 --memory=2200 --driver=docker  --container-runtime=crio: (1m16.332262841s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-319440
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-319440: (10.389958228s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-319440
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-319440 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-319440 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.160738958s)
helpers_test.go:175: Cleaning up "missing-upgrade-319440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-319440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-319440: (2.060670112s)
--- PASS: TestMissingContainerUpgrade (154.13s)
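Here the node container is removed behind minikube's back and a later start must recreate it; sketch (profile name illustrative; the node container carries the profile's name):

    # assuming a profile named missing-demo already exists and is running
    docker stop missing-demo && docker rm missing-demo   # delete the node container directly
    minikube start -p missing-demo --driver=docker --container-runtime=crio   # start recreates it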

                                                
                                    
TestPause/serial/Start (92.38s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-600715 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-600715 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m32.3786949s)
--- PASS: TestPause/serial/Start (92.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163416 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-163416 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (143.467906ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-163416] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)
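The flag conflict and its remedy, straight from the stderr above; sketch with an illustrative profile name:

    # rejected (exit 14): --no-kubernetes contradicts --kubernetes-version
    minikube start -p nokube-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
    # if a version is pinned in the global config, unset it first
    minikube config unset kubernetes-version
    minikube start -p nokube-demo --no-kubernetes --driver=docker --container-runtime=crio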

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (46.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163416 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-163416 --driver=docker  --container-runtime=crio: (46.021643016s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-163416 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163416 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-163416 --no-kubernetes --driver=docker  --container-runtime=crio: (4.550442513s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-163416 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-163416 status -o json: exit status 2 (378.74965ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-163416","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-163416
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-163416: (2.055854248s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.99s)

                                                
                                    
TestNoKubernetes/serial/Start (9.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163416 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-163416 --no-kubernetes --driver=docker  --container-runtime=crio: (9.155481866s)
--- PASS: TestNoKubernetes/serial/Start (9.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-163416 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-163416 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.026331ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
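The assertion is a plain systemd probe over SSH, with the exit status doing all the work (the status-3 seen in the stderr above is systemctl reporting the unit inactive). Sketch:

    minikube ssh -p NoKubernetes-163416 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet running" || echo "kubelet not running"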

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-163416
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-163416: (1.230359385s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163416 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-163416 --driver=docker  --container-runtime=crio: (7.025305279s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-163416 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-163416 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.080485ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (19.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-600715 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0311 13:34:36.486282 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-600715 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.474569659s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (19.50s)

                                                
                                    
TestPause/serial/Pause (1.18s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-600715 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-600715 --alsologtostderr -v=5: (1.182419264s)
--- PASS: TestPause/serial/Pause (1.18s)

                                                
                                    
TestPause/serial/VerifyStatus (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-600715 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-600715 --output=json --layout=cluster: exit status 2 (454.556832ms)

                                                
                                                
-- stdout --
	{"Name":"pause-600715","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-600715","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)
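The --layout=cluster status uses HTTP-style codes, several of which appear in this report: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. Sketch of the pause cycle (profile name illustrative):

    minikube pause -p pause-demo
    minikube status -p pause-demo --output=json --layout=cluster   # StatusCode 418 "Paused", exit code 2
    minikube unpause -p pause-demo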

                                                
                                    
TestPause/serial/Unpause (0.89s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-600715 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

                                                
                                    
TestPause/serial/PauseAgain (1.05s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-600715 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-600715 --alsologtostderr -v=5: (1.051068361s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

                                                
                                    
TestPause/serial/DeletePaused (3.26s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-600715 --alsologtostderr -v=5
E0311 13:34:46.126499 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-600715 --alsologtostderr -v=5: (3.259934498s)
--- PASS: TestPause/serial/DeletePaused (3.26s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-600715
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-600715: exit status 1 (22.042594ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-600715: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.25s)
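
After the delete, the check above asserts the profile's Docker volume is gone: `docker volume inspect` exits 1 and prints "no such volume". A hedged Go sketch of the same assertion, matching on the error text rather than any structured API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// On a cleanly deleted profile this command fails with exit status 1
	// and "no such volume" on stderr, as in the log above.
	out, err := exec.Command("docker", "volume", "inspect", "pause-600715").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("volume cleaned up as expected")
		return
	}
	fmt.Println("volume still present (or unexpected error):", string(out))
}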

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (89.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2961732775 start -p stopped-upgrade-239053 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0311 13:36:43.056238 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2961732775 start -p stopped-upgrade-239053 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.251153651s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2961732775 -p stopped-upgrade-239053 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2961732775 -p stopped-upgrade-239053 stop: (2.74126269s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-239053 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-239053 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.736412936s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (89.73s)
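
The upgrade flow above is: start a cluster with a legacy release binary, stop it, then start the same profile with the binary under test. A minimal Go sketch of that sequence, with placeholder binary paths rather than the tmp paths from the log:

package main

import (
	"log"
	"os/exec"
)

// must runs one invocation of a given binary and aborts on any failure.
func must(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
}

func main() {
	// Placeholder paths; the real test drives a downloaded v1.26.0 binary.
	oldBin, newBin := "/usr/local/bin/minikube-v1.26.0", "out/minikube-linux-arm64"
	must(oldBin, "start", "-p", "stopped-upgrade", "--memory=2200",
		"--vm-driver=docker", "--container-runtime=crio")
	must(oldBin, "-p", "stopped-upgrade", "stop")
	// The actual upgrade: the new binary must adopt and restart the stopped profile.
	must(newBin, "start", "-p", "stopped-upgrade", "--memory=2200",
		"--driver=docker", "--container-runtime=crio")
}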

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-239053
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-239053: (1.311612145s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                    
TestNetworkPlugins/group/false (4.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-147523 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-147523 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (193.516043ms)

                                                
                                                
-- stdout --
	* [false-147523] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18350
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 13:40:22.498724 1299591 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:40:22.498878 1299591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:40:22.498892 1299591 out.go:304] Setting ErrFile to fd 2...
	I0311 13:40:22.498897 1299591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:40:22.499147 1299591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18350-1124504/.minikube/bin
	I0311 13:40:22.499544 1299591 out.go:298] Setting JSON to false
	I0311 13:40:22.500475 1299591 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":19370,"bootTime":1710145053,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0311 13:40:22.500544 1299591 start.go:139] virtualization:  
	I0311 13:40:22.503406 1299591 out.go:177] * [false-147523] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 13:40:22.505887 1299591 out.go:177]   - MINIKUBE_LOCATION=18350
	I0311 13:40:22.508049 1299591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:40:22.505990 1299591 notify.go:220] Checking for updates...
	I0311 13:40:22.511947 1299591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18350-1124504/kubeconfig
	I0311 13:40:22.513886 1299591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18350-1124504/.minikube
	I0311 13:40:22.515832 1299591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 13:40:22.517824 1299591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:40:22.520298 1299591 config.go:182] Loaded profile config "kubernetes-upgrade-006721": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 13:40:22.520413 1299591 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:40:22.545626 1299591 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 13:40:22.545744 1299591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 13:40:22.618952 1299591 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 13:40:22.60721886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 13:40:22.619060 1299591 docker.go:295] overlay module found
	I0311 13:40:22.621167 1299591 out.go:177] * Using the docker driver based on user configuration
	I0311 13:40:22.623101 1299591 start.go:297] selected driver: docker
	I0311 13:40:22.623117 1299591 start.go:901] validating driver "docker" against <nil>
	I0311 13:40:22.623132 1299591 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:40:22.625429 1299591 out.go:177] 
	W0311 13:40:22.627462 1299591 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0311 13:40:22.629542 1299591 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-147523 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-147523

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-147523

>>> host: /etc/nsswitch.conf:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /etc/hosts:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /etc/resolv.conf:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-147523

>>> host: crictl pods:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: crictl containers:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> k8s: describe netcat deployment:
error: context "false-147523" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-147523" does not exist

>>> k8s: netcat logs:
error: context "false-147523" does not exist

>>> k8s: describe coredns deployment:
error: context "false-147523" does not exist

>>> k8s: describe coredns pods:
error: context "false-147523" does not exist

>>> k8s: coredns logs:
error: context "false-147523" does not exist

>>> k8s: describe api server pod(s):
error: context "false-147523" does not exist

>>> k8s: api server logs:
error: context "false-147523" does not exist

>>> host: /etc/cni:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: ip a s:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: ip r s:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: iptables-save:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: iptables table nat:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> k8s: describe kube-proxy daemon set:
error: context "false-147523" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-147523" does not exist

>>> k8s: kube-proxy logs:
error: context "false-147523" does not exist

>>> host: kubelet daemon status:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: kubelet daemon config:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> k8s: kubelet logs:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 11 Mar 2024 13:36:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-006721
contexts:
- context:
    cluster: kubernetes-upgrade-006721
    user: kubernetes-upgrade-006721
  name: kubernetes-upgrade-006721
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-006721
  user:
    client-certificate: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kubernetes-upgrade-006721/client.crt
    client-key: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kubernetes-upgrade-006721/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-147523

>>> host: docker daemon status:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: docker daemon config:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /etc/docker/daemon.json:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: docker system info:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: cri-docker daemon status:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: cri-docker daemon config:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: cri-dockerd version:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: containerd daemon status:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: containerd daemon config:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /etc/containerd/config.toml:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: containerd config dump:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: crio daemon status:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: crio daemon config:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: /etc/crio:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"

>>> host: crio config:
* Profile "false-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147523"
----------------------- debugLogs end: false-147523 [took: 4.262686982s] --------------------------------
helpers_test.go:175: Cleaning up "false-147523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-147523
--- PASS: TestNetworkPlugins/group/false (4.68s)
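
What this test asserts is the fast-fail path above: with --cni=false the crio runtime must be rejected up front with usage error MK_USAGE, which minikube reports as exit status 14. A minimal Go sketch of that exit-code check, using a hypothetical throwaway profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "false-check",
		"--cni=false", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("got expected MK_USAGE exit (14): crio requires CNI")
		return
	}
	fmt.Println("unexpected result:", err)
}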

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (160.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-041218 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0311 13:42:39.527790 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:44:36.485521 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-041218 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m40.689933342s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (160.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-041218 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5364eae-6f82-49d5-93bd-118c15fce588] Pending
helpers_test.go:344: "busybox" [d5364eae-6f82-49d5-93bd-118c15fce588] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d5364eae-6f82-49d5-93bd-118c15fce588] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004615584s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-041218 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
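
The deploy step above is create, wait for Running, then exec a probe in the container. A sketch of the same sequence shelling out to kubectl; the manifest path, context name, and 8m timeout mirror the log, while the helper itself is just for illustration:

package main

import (
	"log"
	"os/exec"
)

// kubectl runs one kubectl invocation against a fixed context and aborts on failure.
func kubectl(ctx string, args ...string) []byte {
	out, err := exec.Command("kubectl",
		append([]string{"--context", ctx}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	ctx := "old-k8s-version-041218"
	kubectl(ctx, "create", "-f", "testdata/busybox.yaml")
	kubectl(ctx, "wait", "--for=condition=Ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m0s")
	// The test then checks the container's file-descriptor limit.
	log.Printf("ulimit -n: %s", kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"))
}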

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-041218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-041218 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-041218 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-041218 --alsologtostderr -v=3: (12.284175342s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-041218 -n old-k8s-version-041218
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-041218 -n old-k8s-version-041218: exit status 7 (278.362093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-041218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.59s)
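
The "exit status 7 (may be ok)" note above reflects minikube's convention that status exits 7 when the host is stopped; the test tolerates that before re-enabling an addon. A small Go sketch of the same tolerant status check:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "status", "--format={{.Host}}",
		"-p", "old-k8s-version-041218").Output()
	var ee *exec.ExitError
	// Exit status 7 means the host is stopped, which is acceptable here.
	if err != nil && !(errors.As(err, &ee) && ee.ExitCode() == 7) {
		fmt.Println("unexpected status failure:", err)
		return
	}
	fmt.Printf("host state: %s (exit 7 is fine for a stopped host)\n", out)
}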

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (33.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-041218 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-041218 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (33.299641004s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-041218 -n old-k8s-version-041218
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (33.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (78.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-326344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-326344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m18.514446461s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g4kjg" [1f4a640d-1526-4cda-82b0-5c5efe7a068e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g4kjg" [1f4a640d-1526-4cda-82b0-5c5efe7a068e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 36.003700618s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (36.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g4kjg" [1f4a640d-1526-4cda-82b0-5c5efe7a068e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003671232s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-041218 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-041218 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
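
The image audit above lists the profile's images as JSON and flags anything outside the standard Kubernetes registries. A hedged Go sketch of that filter; the "repoTags" field name is an assumption made for illustration, not a documented schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type image struct {
	RepoTags []string `json:"repoTags"` // assumed field name
}

func main() {
	out, err := exec.Command("minikube", "-p", "old-k8s-version-041218",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list:", err)
		return
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") &&
				!strings.HasPrefix(tag, "gcr.io/k8s-minikube/") {
				fmt.Println("found non-minikube image:", tag)
			}
		}
	}
}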

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-041218 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-041218 -n old-k8s-version-041218
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-041218 -n old-k8s-version-041218: exit status 2 (348.548818ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-041218 -n old-k8s-version-041218
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-041218 -n old-k8s-version-041218: exit status 2 (379.572077ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-041218 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-041218 -n old-k8s-version-041218
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-041218 -n old-k8s-version-041218
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.43s)
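
The paused-state probe above reads two Go-template fields and treats the combination APIServer "Paused" plus Kubelet "Stopped" as paused; exit status 2 is expected in that state. A compact Go sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// field reads one status template field; status exits 2 for a paused
// cluster, so we keep stdout even when the command errors.
func field(profile, tmpl string) string {
	out, _ := exec.Command("minikube", "status", "--format="+tmpl, "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	p := "old-k8s-version-041218"
	api, kubelet := field(p, "{{.APIServer}}"), field(p, "{{.Kubelet}}")
	fmt.Printf("apiserver=%s kubelet=%s paused=%v\n",
		api, kubelet, api == "Paused" && kubelet == "Stopped")
}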

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-719445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-719445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (54.664904758s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-326344 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [07054d31-6dc0-4037-9eaa-bdf2652cb2dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0311 13:46:43.056608 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
helpers_test.go:344: "busybox" [07054d31-6dc0-4037-9eaa-bdf2652cb2dd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004890005s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-326344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-326344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-326344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.276720893s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-326344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-326344 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-326344 --alsologtostderr -v=3: (12.204180442s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-326344 -n no-preload-326344
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-326344 -n no-preload-326344: exit status 7 (98.081799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-326344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (279.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-326344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-326344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (4m39.408389329s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-326344 -n no-preload-326344
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (279.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-719445 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [129d89cc-dfd3-4ab5-824a-3f8b629b2ef0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [129d89cc-dfd3-4ab5-824a-3f8b629b2ef0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005265667s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-719445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-719445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-719445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.67702386s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-719445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-719445 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-719445 --alsologtostderr -v=3: (12.677066366s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-719445 -n embed-certs-719445
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-719445 -n embed-certs-719445: exit status 7 (81.955334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-719445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-719445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0311 13:49:36.486440 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:49:49.416575 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:49.421854 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:49.432174 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:49.452485 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:49.492783 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:49.573185 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:49.733639 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:50.053950 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:50.694567 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:51.975059 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:54.535931 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:49:59.656157 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:50:09.896685 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:50:30.377255 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:51:11.337736 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
E0311 13:51:26.126732 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
E0311 13:51:43.056522 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/addons-127043/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-719445 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (4m27.002869755s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-719445 -n embed-certs-719445
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.47s)
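Note on the E0311 cert_rotation.go:168 lines above: they are log noise, not failures. Each one is the client certificate reloader (apparently client-go's transport cert rotation) retrying a read of a client.crt belonging to a profile that has already been deleted, and the timestamps show the retry interval roughly doubling. A minimal sketch of that retry shape, assuming nothing about minikube internals (the path and loop count here are hypothetical):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path for illustration; the report shows the real one under
	// /home/jenkins/minikube-integration/.../profiles/<profile>/client.crt.
	const crt = "/tmp/deleted-profile/client.crt"
	backoff := 10 * time.Millisecond
	for i := 0; i < 6; i++ {
		if _, err := os.ReadFile(crt); err != nil {
			// Same shape as the report: "key failed with : open <path>: no such file or directory"
			fmt.Printf("key failed with : %v\n", err)
		}
		time.Sleep(backoff)
		backoff *= 2 // doubling interval, matching the timestamp spacing above
	}
}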

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9zjbt" [9ae2485c-b405-44cb-9cef-fb370afe85b4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004420828s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9zjbt" [9ae2485c-b405-44cb-9cef-fb370afe85b4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003486204s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-326344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-326344 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-326344 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-326344 -n no-preload-326344
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-326344 -n no-preload-326344: exit status 2 (329.985312ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-326344 -n no-preload-326344
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-326344 -n no-preload-326344: exit status 2 (345.801286ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-326344 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-326344 -n no-preload-326344
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-326344 -n no-preload-326344
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)
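The Pause subtest above follows a fixed sequence: pause the profile, expect `status` to exit 2 with APIServer=Paused and Kubelet=Stopped, then unpause and expect both queries to succeed. A self-contained sketch of that check, assuming only the binary quoted in the log (this is not the harness's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the same binary the log shows and reports trimmed stdout
// plus the exit code (0 unless the command exited non-zero).
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	const p = "no-preload-326344"
	run("pause", "-p", p, "--alsologtostderr", "-v=1")

	// While paused, `status` exits 2 by design: APIServer reports Paused
	// and Kubelet reports Stopped, exactly as captured above.
	api, code := run("status", "--format={{.APIServer}}", "-p", p, "-n", p)
	fmt.Printf("APIServer=%s exit=%d\n", api, code)

	run("unpause", "-p", p, "--alsologtostderr", "-v=1")

	// After unpause, both status queries return exit 0 again.
	kubelet, code := run("status", "--format={{.Kubelet}}", "-p", p, "-n", p)
	fmt.Printf("Kubelet=%s exit=%d\n", kubelet, code)
}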

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-400620 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-400620 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (53.00117002s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v7f69" [2bb7bec4-6c1a-4a2f-a9b6-baf7c7dba7b8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003767965s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v7f69" [2bb7bec4-6c1a-4a2f-a9b6-baf7c7dba7b8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00376504s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-719445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-719445 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
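VerifyKubernetesImages asks the profile for its image list as JSON and reports anything outside the stock Kubernetes set; the "Found non-minikube image" lines above are informational, not errors. A rough sketch of that filtering step; the JSON field name and the registry check here are assumptions for illustration, not the test's real logic:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image carries only the field this sketch needs; the name "repoTags" is an
// assumption about the JSON shape, made for illustration.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "embed-certs-719445",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Crude allowlist stand-in: treat everything outside registry.k8s.io
			// as "non-minikube", like kindest/kindnetd and busybox above.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}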

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-719445 --alsologtostderr -v=1
E0311 13:52:33.258483 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-719445 --alsologtostderr -v=1: (1.150927896s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-719445 -n embed-certs-719445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-719445 -n embed-certs-719445: exit status 2 (460.929273ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-719445 -n embed-certs-719445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-719445 -n embed-certs-719445: exit status 2 (454.105861ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-719445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-719445 -n embed-certs-719445
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-719445 -n embed-certs-719445
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (51.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-202634 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-202634 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (51.370636118s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-400620 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dea8e4c4-9ceb-493a-80ff-35f25efea6a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dea8e4c4-9ceb-493a-80ff-35f25efea6a9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004778041s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-400620 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-400620 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-400620 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.259588635s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-400620 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-400620 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-400620 --alsologtostderr -v=3: (12.053099189s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620: exit status 7 (83.140227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-400620 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-400620 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-400620 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (4m30.701700397s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-202634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-202634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.44124537s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-202634 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-202634 --alsologtostderr -v=3: (1.372684715s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-202634 -n newest-cni-202634
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-202634 -n newest-cni-202634: exit status 7 (142.693555ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-202634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-202634 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-202634 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (17.604691625s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-202634 -n newest-cni-202634
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-202634 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-202634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-202634 --alsologtostderr -v=1: (1.025106418s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-202634 -n newest-cni-202634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-202634 -n newest-cni-202634: exit status 2 (449.189522ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-202634 -n newest-cni-202634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-202634 -n newest-cni-202634: exit status 2 (501.383871ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-202634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-202634 -n newest-cni-202634
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-202634 -n newest-cni-202634
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.84s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0311 13:54:36.485640 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
E0311 13:54:49.416868 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.76862478s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.77s)
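From here on, TestNetworkPlugins starts one profile per CNI, and the only material difference between the Start invocations is the CNI selection: no flag for auto, --cni=<plugin> for kindnet/calico/flannel/bridge, a manifest path for custom-flannel, and --enable-default-cni=true for enable-default-cni. A sketch that just prints the matrix of invocations quoted throughout this section (wire the args into exec.Command to actually run them):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Profiles and CNI selections as exercised in this section.
	matrix := []struct{ profile, cni string }{
		{"auto-147523", ""}, // no flag: minikube picks the CNI
		{"kindnet-147523", "kindnet"},
		{"calico-147523", "calico"},
		{"custom-flannel-147523", "testdata/kube-flannel.yaml"}, // manifest instead of a built-in
		{"enable-default-cni-147523", ""},                       // uses --enable-default-cni=true
		{"flannel-147523", "flannel"},
		{"bridge-147523", "bridge"},
	}
	for _, m := range matrix {
		args := []string{"start", "-p", m.profile, "--memory=3072", "--alsologtostderr",
			"--wait=true", "--wait-timeout=15m", "--driver=docker", "--container-runtime=crio"}
		if m.cni != "" {
			args = append(args, "--cni="+m.cni)
		}
		if m.profile == "enable-default-cni-147523" {
			args = append(args, "--enable-default-cni=true")
		}
		fmt.Println("out/minikube-linux-arm64", strings.Join(args, " "))
	}
}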

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-147523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-147523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ckqjj" [72368be2-0c91-4b0c-b794-59c810ae0786] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ckqjj" [72368be2-0c91-4b0c-b794-59c810ae0786] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003875468s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-147523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
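Each plugin group runs the same three probes against the netcat deployment: DNS (resolve kubernetes.default), Localhost (nc to the pod's own localhost:8080), and HairPin (the pod reaching itself through its `netcat` service). A compact equivalent of the trio, assuming kubectl access to the same context (the harness runs these as separate subtests):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell one-liner inside the netcat deployment, like
// `kubectl exec deployment/netcat -- /bin/sh -c "<cmd>"` in the log.
func probe(ctx, cmd string) error {
	return exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", cmd).Run()
}

func main() {
	const ctx = "auto-147523"
	checks := []struct{ name, cmd string }{
		{"DNS", "nslookup kubernetes.default"},          // cluster DNS resolves services
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"}, // pod's own localhost port answers
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},      // pod reaches itself via its service
	}
	for _, c := range checks {
		if err := probe(ctx, c.cmd); err != nil {
			fmt.Println(c.name, "failed:", err)
		} else {
			fmt.Println(c.name, "ok")
		}
	}
}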

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (52.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (52.575566026s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.58s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2ghvx" [eaf5d67f-7ad6-4ffe-ac55-4c10200e0dd2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004673864s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
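ControllerPod is the gate for CNIs that ship a controller: the harness waits up to 10m for a pod matching the plugin's label (app=kindnet here, k8s-app=calico-node and app=flannel later). A one-command equivalent via kubectl wait, offered as a sketch rather than the harness's own polling:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Label and namespace come from the log lines above; 10m mirrors the
	// harness's "waiting 10m0s" budget.
	cmd := exec.Command("kubectl", "--context", "kindnet-147523",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=kindnet", "-n", "kube-system", "--timeout=10m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("controller pod not ready:", err)
	}
}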

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-147523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-147523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m8srm" [edecb431-4598-4066-a51d-382505b5d8b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m8srm" [edecb431-4598-4066-a51d-382505b5d8b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003479553s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-147523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0311 13:57:01.316930 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/no-preload-326344/client.crt: no such file or directory
E0311 13:57:21.797135 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/no-preload-326344/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.16175992s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zzp9r" [60fe837c-17a7-43e7-9c66-812268cbe29f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004284756s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zzp9r" [60fe837c-17a7-43e7-9c66-812268cbe29f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004706927s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-400620 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-400620 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-400620 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-400620 --alsologtostderr -v=1: (1.094011624s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620: exit status 2 (362.490063ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620
E0311 13:58:02.758011 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/no-preload-326344/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620: exit status 2 (336.012215ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-400620 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-400620 -n default-k8s-diff-port-400620
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (66.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.534129088s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.53s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4gxtb" [df7b48eb-6813-45b6-9f13-b625cb82c4df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006816883s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-147523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-147523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-64wzm" [96b0a6d6-fe33-4b5a-aae1-680d18c1999c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-64wzm" [96b0a6d6-fe33-4b5a-aae1-680d18c1999c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003610924s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.40s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-147523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (48.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (48.897708718s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.90s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-147523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-147523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6j2cn" [478dec06-994c-44c2-9a93-87ac646abda2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0311 13:59:19.528924 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/functional-112360/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6j2cn" [478dec06-994c-44c2-9a93-87ac646abda2] Running
E0311 13:59:24.678491 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/no-preload-326344/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003328122s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-147523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-147523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-147523 replace --force -f testdata/netcat-deployment.yaml
E0311 13:59:49.417426 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/old-k8s-version-041218/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qptsb" [48e1732c-b292-4529-8e3b-af5d883f838c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qptsb" [48e1732c-b292-4529-8e3b-af5d883f838c] Running
E0311 13:59:55.895393 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 13:59:56.535800 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 13:59:57.816329 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 14:00:00.401470 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.220698895s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.61s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0311 13:59:55.257785 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 13:59:55.263349 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 13:59:55.273578 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 13:59:55.293843 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 13:59:55.334104 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 13:59:55.414570 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
E0311 13:59:55.575048 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m14.497767397s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.50s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-147523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (84.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0311 14:00:36.242569 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-147523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m24.713326046s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.71s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zqvgn" [2d0a3666-d587-4b2a-9632-bcfba3775b3c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004181306s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
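The ControllerPod gate is a plain label-selector wait, using the namespace and label shown above; the manual equivalent is:

    # flannel's DaemonSet pods carry app=flannel in the kube-flannel namespace
    kubectl --context flannel-147523 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-147523 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m0s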

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-147523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)
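KubeletFlags only asserts that the kubelet invocation can be read back over SSH; pgrep -a prints the PID followed by the full command line, which is where the runtime wiring shows up. With crio one would expect a CRI endpoint flag along these lines (the exact socket path is an assumption, not quoted from this log):

    out/minikube-linux-arm64 ssh -p flannel-147523 "pgrep -a kubelet"
    # expect a flag like --container-runtime-endpoint=unix:///var/run/crio/crio.sock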

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-147523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qgqmz" [22cffb1a-634d-443f-a1ce-acdc5eca7fd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0311 14:01:17.203570 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/auto-147523/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-qgqmz" [22cffb1a-634d-443f-a1ce-acdc5eca7fd5] Running
E0311 14:01:20.789090 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:20.794356 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:20.804618 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:20.824869 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:20.865397 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:20.945765 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:21.106873 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:21.427142 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:22.068023 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
E0311 14:01:23.349071 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003562807s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)
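The netcat workload used by NetCatPod, DNS, Localhost, and HairPin can be inferred from the log: a Deployment named netcat with label app=netcat, a container named dnsutils, and a Service named netcat on port 8080. The manifest below is a hypothetical reconstruction of testdata/netcat-deployment.yaml from those facts alone; the image and the listener command are assumptions:

    kubectl --context flannel-147523 apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: netcat
    spec:
      selector:
        matchLabels:
          app: netcat
      template:
        metadata:
          labels:
            app: netcat
        spec:
          containers:
          - name: dnsutils              # container name taken from the Pending status above
            image: busybox:1.36         # assumption: any image shipping nslookup and nc
            command: ["/bin/sh", "-c", "while true; do nc -l -p 8080; done"]  # keep port 8080 answering
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: netcat
    spec:
      selector:
        app: netcat
      ports:
      - port: 8080
        targetPort: 8080
    EOF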

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-147523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-147523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-147523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wdf24" [d3539c8a-ce6a-4ede-aded-25e545d7d167] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wdf24" [d3539c8a-ce6a-4ede-aded-25e545d7d167] Running
E0311 14:02:01.752246 1129906 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kindnet-147523/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003595982s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-147523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-147523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (32/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-083833 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-083833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-083833
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-922896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-922896
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-147523 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-147523

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-147523

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /etc/hosts:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /etc/resolv.conf:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-147523

>>> host: crictl pods:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: crictl containers:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> k8s: describe netcat deployment:
error: context "kubenet-147523" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-147523" does not exist

>>> k8s: netcat logs:
error: context "kubenet-147523" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-147523" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-147523" does not exist

>>> k8s: coredns logs:
error: context "kubenet-147523" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-147523" does not exist

>>> k8s: api server logs:
error: context "kubenet-147523" does not exist

>>> host: /etc/cni:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: ip a s:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: ip r s:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: iptables-save:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: iptables table nat:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-147523" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-147523" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-147523" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: kubelet daemon config:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> k8s: kubelet logs:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 11 Mar 2024 13:36:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-006721
contexts:
- context:
    cluster: kubernetes-upgrade-006721
    user: kubernetes-upgrade-006721
  name: kubernetes-upgrade-006721
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-006721
  user:
    client-certificate: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kubernetes-upgrade-006721/client.crt
    client-key: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kubernetes-upgrade-006721/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-147523

>>> host: docker daemon status:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: docker daemon config:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: docker system info:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: cri-docker daemon status:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: cri-docker daemon config:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: cri-dockerd version:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: containerd daemon status:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: containerd daemon config:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: containerd config dump:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: crio daemon status:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: crio daemon config:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: /etc/crio:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

>>> host: crio config:
* Profile "kubenet-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147523"

----------------------- debugLogs end: kubenet-147523 [took: 3.841212775s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-147523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-147523
--- SKIP: TestNetworkPlugins/group/kubenet (4.00s)
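All the "context was not found" and "Profile ... not found" lines in the debugLogs above are expected: the suite skips before ever running minikube start, so no kubenet-147523 profile or kubeconfig context exists, and the kubeconfig the debug collector dumps only contains the stale kubernetes-upgrade-006721 entry with current-context left empty. That state is easy to confirm by hand:

    # list contexts; the CURRENT column marker is absent because current-context is ""
    kubectl config get-contexts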

                                                
                                    
TestNetworkPlugins/group/cilium (5.42s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-147523 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-147523

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-147523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-147523" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-147523" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-147523" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-147523" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: kubelet daemon config:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> k8s: kubelet logs:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18350-1124504/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 11 Mar 2024 13:36:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-006721
contexts:
- context:
    cluster: kubernetes-upgrade-006721
    user: kubernetes-upgrade-006721
  name: kubernetes-upgrade-006721
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-006721
  user:
    client-certificate: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kubernetes-upgrade-006721/client.crt
    client-key: /home/jenkins/minikube-integration/18350-1124504/.minikube/profiles/kubernetes-upgrade-006721/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-147523

>>> host: docker daemon status:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: docker daemon config:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: docker system info:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: cri-docker daemon status:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: cri-docker daemon config:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: cri-dockerd version:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: containerd daemon status:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: containerd daemon config:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: containerd config dump:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: crio daemon status:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: crio daemon config:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: /etc/crio:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

>>> host: crio config:
* Profile "cilium-147523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147523"

----------------------- debugLogs end: cilium-147523 [took: 5.226491012s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-147523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-147523
--- SKIP: TestNetworkPlugins/group/cilium (5.42s)
