Test Report: Docker_Linux_crio_arm64 18517

225d0002a402609a65399cabc142d90eb2090f83:2024-03-27:33764

Failed tests (2/335)

|-------|---------------------------------------------|--------------|
| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress                 | 166.25       |
| 182   | TestMultiControlPlane/serial/RestartCluster | 124.13       |
|-------|---------------------------------------------|--------------|
TestAddons/parallel/Ingress (166.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-408183 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-408183 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-408183 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e604eb6e-b9ed-4b7a-8cad-df8f3b18cdab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e604eb6e-b9ed-4b7a-8cad-df8f3b18cdab] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004129573s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-408183 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.677871742s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
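Exit status 28 here is curl's exit code (CURLE_OPERATION_TIMEDOUT) propagated back through minikube ssh, which points at a request that timed out rather than a refused connection. A minimal sketch for re-running the same probe by hand, assuming the addons-408183 profile from this run is still up (URL and Host header are copied from the test invocation above):

	# Re-run the failing ingress probe with an explicit timeout and verbose output.
	out/minikube-linux-arm64 -p addons-408183 ssh \
	  "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"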
addons_test.go:286: (dbg) Run:  kubectl --context addons-408183 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.073532505s)

-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-408183 addons disable ingress --alsologtostderr -v=1: (7.851217398s)
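The nslookup timeout is the same symptom seen from the DNS side: the ingress-dns server at 192.168.49.2 never answered at all (a healthy addon with a missing record would return NXDOMAIN instead). A quick hand check, as a sketch only: it assumes a live reproduction, since the teardown steps above disable ingress-dns, and it uses BIND nslookup's -timeout option:

	# Query the ingress-dns addon directly at the node IP with a short timeout.
	NODE_IP="$(out/minikube-linux-arm64 -p addons-408183 ip)"
	nslookup -timeout=5 hello-john.test "$NODE_IP"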
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-408183
helpers_test.go:235: (dbg) docker inspect addons-408183:

-- stdout --
	[
	    {
	        "Id": "24193609a0c470f138eea0369bc64102347a98d47509cfe9d0e6c9c01a9b7231",
	        "Created": "2024-03-27T18:59:14.782320179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-27T18:59:15.014534292Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f9b5358e8c18dbe49e632154cad75e0968b2e103f621caff2c3ed996f4155861",
	        "ResolvConfPath": "/var/lib/docker/containers/24193609a0c470f138eea0369bc64102347a98d47509cfe9d0e6c9c01a9b7231/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24193609a0c470f138eea0369bc64102347a98d47509cfe9d0e6c9c01a9b7231/hostname",
	        "HostsPath": "/var/lib/docker/containers/24193609a0c470f138eea0369bc64102347a98d47509cfe9d0e6c9c01a9b7231/hosts",
	        "LogPath": "/var/lib/docker/containers/24193609a0c470f138eea0369bc64102347a98d47509cfe9d0e6c9c01a9b7231/24193609a0c470f138eea0369bc64102347a98d47509cfe9d0e6c9c01a9b7231-json.log",
	        "Name": "/addons-408183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-408183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-408183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f02d96de6f028e3038552c7e68ad33979a8d294a9d76f826c6fc5eed94df0a71-init/diff:/var/lib/docker/overlay2/035f6eff93a34b4eb6fc7c3d7c8227de09cbceaeca4dc81b78c663243a30a00f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f02d96de6f028e3038552c7e68ad33979a8d294a9d76f826c6fc5eed94df0a71/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f02d96de6f028e3038552c7e68ad33979a8d294a9d76f826c6fc5eed94df0a71/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f02d96de6f028e3038552c7e68ad33979a8d294a9d76f826c6fc5eed94df0a71/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-408183",
	                "Source": "/var/lib/docker/volumes/addons-408183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-408183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-408183",
	                "name.minikube.sigs.k8s.io": "addons-408183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b99b3a96e0e474cfd4fd4544563b0af3eb067395dcfbdfcadb61ea6c10f8f000",
	            "SandboxKey": "/var/run/docker/netns/b99b3a96e0e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-408183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "365ab3cb1ac5b9aa91653ebc753f8bc35ec4d3365d1ad261cb898fd7750691c2",
	                    "EndpointID": "c4bcb5e258fdd734928abaf3417195b1b06aa682656812c12dc0fa4a84e29216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-408183",
	                        "24193609a0c4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
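In the inspect output above, every exposed container port is published on 127.0.0.1 with an ephemeral host port (22/tcp on 33518, 8443/tcp on 33515, and so on), which is why the provisioning log near the end of this report dials 127.0.0.1:33518 for SSH. A single mapping can be pulled out with the same Go template minikube itself runs (visible in the cli_runner lines further down):

	# Print the host port Docker mapped to the container's SSH port.
	docker container inspect addons-408183 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'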
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-408183 -n addons-408183
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-408183 logs -n 25: (1.488771736s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-837463                                                                     | download-only-837463   | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| delete  | -p download-only-541014                                                                     | download-only-541014   | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| delete  | -p download-only-696066                                                                     | download-only-696066   | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| start   | --download-only -p                                                                          | download-docker-842619 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | download-docker-842619                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p download-docker-842619                                                                   | download-docker-842619 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-289584   | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | binary-mirror-289584                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:39875                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-289584                                                                     | binary-mirror-289584   | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| addons  | enable dashboard -p                                                                         | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | addons-408183                                                                               |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | addons-408183                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-408183 --wait=true                                                                | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 19:01 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	| ip      | addons-408183 ip                                                                            | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:01 UTC | 27 Mar 24 19:01 UTC |
	| addons  | addons-408183 addons disable                                                                | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:01 UTC | 27 Mar 24 19:01 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-408183 addons                                                                        | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:01 UTC | 27 Mar 24 19:01 UTC |
	|         | disable metrics-server                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:02 UTC | 27 Mar 24 19:02 UTC |
	|         | addons-408183                                                                               |                        |         |                |                     |                     |
	| ssh     | addons-408183 ssh curl -s                                                                   | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:02 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |                |                     |                     |
	| addons  | addons-408183 addons                                                                        | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:02 UTC | 27 Mar 24 19:02 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-408183 addons                                                                        | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:02 UTC | 27 Mar 24 19:02 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:02 UTC | 27 Mar 24 19:02 UTC |
	|         | -p addons-408183                                                                            |                        |         |                |                     |                     |
	| ssh     | addons-408183 ssh cat                                                                       | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:03 UTC | 27 Mar 24 19:03 UTC |
	|         | /opt/local-path-provisioner/pvc-202e28ad-5d38-4c60-aad7-1dea41135b4e_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | addons-408183 addons disable                                                                | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:03 UTC | 27 Mar 24 19:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:03 UTC | 27 Mar 24 19:03 UTC |
	|         | addons-408183                                                                               |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:03 UTC | 27 Mar 24 19:03 UTC |
	|         | -p addons-408183                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| ip      | addons-408183 ip                                                                            | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:04 UTC | 27 Mar 24 19:04 UTC |
	| addons  | addons-408183 addons disable                                                                | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:04 UTC | 27 Mar 24 19:04 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-408183 addons disable                                                                | addons-408183          | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:04 UTC | 27 Mar 24 19:04 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 18:58:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 18:58:50.470922  568461 out.go:291] Setting OutFile to fd 1 ...
	I0327 18:58:50.471177  568461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:58:50.471206  568461 out.go:304] Setting ErrFile to fd 2...
	I0327 18:58:50.471224  568461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:58:50.471500  568461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 18:58:50.471989  568461 out.go:298] Setting JSON to false
	I0327 18:58:50.472895  568461 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9668,"bootTime":1711556262,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 18:58:50.472991  568461 start.go:139] virtualization:  
	I0327 18:58:50.476111  568461 out.go:177] * [addons-408183] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 18:58:50.479762  568461 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 18:58:50.482293  568461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 18:58:50.479897  568461 notify.go:220] Checking for updates...
	I0327 18:58:50.486219  568461 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 18:58:50.488606  568461 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	I0327 18:58:50.490519  568461 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 18:58:50.492664  568461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 18:58:50.494682  568461 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 18:58:50.513230  568461 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 18:58:50.513354  568461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 18:58:50.575302  568461 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 18:58:50.566569352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 18:58:50.575410  568461 docker.go:295] overlay module found
	I0327 18:58:50.579490  568461 out.go:177] * Using the docker driver based on user configuration
	I0327 18:58:50.582033  568461 start.go:297] selected driver: docker
	I0327 18:58:50.582051  568461 start.go:901] validating driver "docker" against <nil>
	I0327 18:58:50.582065  568461 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 18:58:50.582700  568461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 18:58:50.634568  568461 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 18:58:50.624505159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 18:58:50.634732  568461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 18:58:50.634980  568461 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 18:58:50.636732  568461 out.go:177] * Using Docker driver with root privileges
	I0327 18:58:50.638215  568461 cni.go:84] Creating CNI manager for ""
	I0327 18:58:50.638245  568461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0327 18:58:50.638260  568461 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 18:58:50.638333  568461 start.go:340] cluster config:
	{Name:addons-408183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-408183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 18:58:50.640873  568461 out.go:177] * Starting "addons-408183" primary control-plane node in "addons-408183" cluster
	I0327 18:58:50.643034  568461 cache.go:121] Beginning downloading kic base image for docker with crio
	I0327 18:58:50.644893  568461 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 18:58:50.646447  568461 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 18:58:50.646493  568461 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4
	I0327 18:58:50.646505  568461 cache.go:56] Caching tarball of preloaded images
	I0327 18:58:50.646507  568461 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 18:58:50.646603  568461 preload.go:173] Found /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0327 18:58:50.646613  568461 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 18:58:50.646959  568461 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/config.json ...
	I0327 18:58:50.646990  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/config.json: {Name:mk20543b61f1b6a2f47fba02e0c716a799cf7d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:58:50.660357  568461 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 18:58:50.660549  568461 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 18:58:50.660581  568461 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 18:58:50.660587  568461 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 18:58:50.660595  568461 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 18:58:50.660600  568461 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 from local cache
	I0327 18:59:07.083476  568461 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 from cached tarball
	I0327 18:59:07.083518  568461 cache.go:194] Successfully downloaded all kic artifacts
	I0327 18:59:07.083548  568461 start.go:360] acquireMachinesLock for addons-408183: {Name:mk90a3cb9f3ecd50654a7fb0133b9bf97daa4b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 18:59:07.083681  568461 start.go:364] duration metric: took 107.684µs to acquireMachinesLock for "addons-408183"
	I0327 18:59:07.083711  568461 start.go:93] Provisioning new machine with config: &{Name:addons-408183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-408183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 18:59:07.083796  568461 start.go:125] createHost starting for "" (driver="docker")
	I0327 18:59:07.086804  568461 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0327 18:59:07.087043  568461 start.go:159] libmachine.API.Create for "addons-408183" (driver="docker")
	I0327 18:59:07.087077  568461 client.go:168] LocalClient.Create starting
	I0327 18:59:07.087182  568461 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem
	I0327 18:59:07.593422  568461 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem
	I0327 18:59:08.350742  568461 cli_runner.go:164] Run: docker network inspect addons-408183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0327 18:59:08.365069  568461 cli_runner.go:211] docker network inspect addons-408183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0327 18:59:08.365161  568461 network_create.go:281] running [docker network inspect addons-408183] to gather additional debugging logs...
	I0327 18:59:08.365181  568461 cli_runner.go:164] Run: docker network inspect addons-408183
	W0327 18:59:08.381751  568461 cli_runner.go:211] docker network inspect addons-408183 returned with exit code 1
	I0327 18:59:08.381786  568461 network_create.go:284] error running [docker network inspect addons-408183]: docker network inspect addons-408183: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-408183 not found
	I0327 18:59:08.381808  568461 network_create.go:286] output of [docker network inspect addons-408183]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-408183 not found
	
	** /stderr **
	I0327 18:59:08.381900  568461 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 18:59:08.396971  568461 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400048abb0}
	I0327 18:59:08.397013  568461 network_create.go:124] attempt to create docker network addons-408183 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0327 18:59:08.397075  568461 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-408183 addons-408183
	I0327 18:59:08.457614  568461 network_create.go:108] docker network addons-408183 192.168.49.0/24 created
	I0327 18:59:08.457659  568461 kic.go:121] calculated static IP "192.168.49.2" for the "addons-408183" container
	I0327 18:59:08.457778  568461 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0327 18:59:08.478693  568461 cli_runner.go:164] Run: docker volume create addons-408183 --label name.minikube.sigs.k8s.io=addons-408183 --label created_by.minikube.sigs.k8s.io=true
	I0327 18:59:08.502023  568461 oci.go:103] Successfully created a docker volume addons-408183
	I0327 18:59:08.502124  568461 cli_runner.go:164] Run: docker run --rm --name addons-408183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-408183 --entrypoint /usr/bin/test -v addons-408183:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib
	I0327 18:59:10.453138  568461 cli_runner.go:217] Completed: docker run --rm --name addons-408183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-408183 --entrypoint /usr/bin/test -v addons-408183:/var gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -d /var/lib: (1.950955255s)
	I0327 18:59:10.453173  568461 oci.go:107] Successfully prepared a docker volume addons-408183
	I0327 18:59:10.453198  568461 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 18:59:10.453217  568461 kic.go:194] Starting extracting preloaded images to volume ...
	I0327 18:59:10.453332  568461 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-408183:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0327 18:59:14.720280  568461 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-408183:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.266904707s)
	I0327 18:59:14.720325  568461 kic.go:203] duration metric: took 4.267104813s to extract preloaded images to volume ...
	W0327 18:59:14.720452  568461 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0327 18:59:14.720583  568461 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0327 18:59:14.769731  568461 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-408183 --name addons-408183 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-408183 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-408183 --network addons-408183 --ip 192.168.49.2 --volume addons-408183:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8
	I0327 18:59:15.033454  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Running}}
	I0327 18:59:15.054013  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:15.075150  568461 cli_runner.go:164] Run: docker exec addons-408183 stat /var/lib/dpkg/alternatives/iptables
	I0327 18:59:15.143641  568461 oci.go:144] the created container "addons-408183" has a running status.
	I0327 18:59:15.143668  568461 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa...
	I0327 18:59:15.802113  568461 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0327 18:59:15.818481  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:15.840173  568461 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0327 18:59:15.840191  568461 kic_runner.go:114] Args: [docker exec --privileged addons-408183 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0327 18:59:15.901139  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:15.920092  568461 machine.go:94] provisionDockerMachine start ...
	I0327 18:59:15.920189  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:15.939079  568461 main.go:141] libmachine: Using SSH client type: native
	I0327 18:59:15.939343  568461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0327 18:59:15.939353  568461 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 18:59:16.070231  568461 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-408183
	
	I0327 18:59:16.070331  568461 ubuntu.go:169] provisioning hostname "addons-408183"
	I0327 18:59:16.070429  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:16.088310  568461 main.go:141] libmachine: Using SSH client type: native
	I0327 18:59:16.088555  568461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0327 18:59:16.088570  568461 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-408183 && echo "addons-408183" | sudo tee /etc/hostname
	I0327 18:59:16.230038  568461 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-408183
	
	I0327 18:59:16.230127  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:16.245950  568461 main.go:141] libmachine: Using SSH client type: native
	I0327 18:59:16.246206  568461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0327 18:59:16.246227  568461 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-408183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-408183/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-408183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 18:59:16.369971  568461 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 18:59:16.369997  568461 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18517-562206/.minikube CaCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18517-562206/.minikube}
	I0327 18:59:16.370019  568461 ubuntu.go:177] setting up certificates
	I0327 18:59:16.370029  568461 provision.go:84] configureAuth start
	I0327 18:59:16.370101  568461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-408183
	I0327 18:59:16.386581  568461 provision.go:143] copyHostCerts
	I0327 18:59:16.386671  568461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem (1082 bytes)
	I0327 18:59:16.386787  568461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem (1123 bytes)
	I0327 18:59:16.386839  568461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem (1679 bytes)
	I0327 18:59:16.386885  568461 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem org=jenkins.addons-408183 san=[127.0.0.1 192.168.49.2 addons-408183 localhost minikube]
	I0327 18:59:16.987168  568461 provision.go:177] copyRemoteCerts
	I0327 18:59:16.987240  568461 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 18:59:16.987283  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:17.008646  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:17.098449  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 18:59:17.121980  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 18:59:17.145455  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
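
The server certificate generated above is signed for the SAN list shown in the provisioning log (127.0.0.1, 192.168.49.2, addons-408183, localhost, minikube). A quick sketch for confirming those SANs with openssl (path taken from the log):

    # Print the Subject Alternative Names baked into the machine server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
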
	I0327 18:59:17.169572  568461 provision.go:87] duration metric: took 799.528435ms to configureAuth
	I0327 18:59:17.169598  568461 ubuntu.go:193] setting minikube options for container-runtime
	I0327 18:59:17.169789  568461 config.go:182] Loaded profile config "addons-408183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 18:59:17.169889  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:17.184231  568461 main.go:141] libmachine: Using SSH client type: native
	I0327 18:59:17.184495  568461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0327 18:59:17.184511  568461 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 18:59:17.409642  568461 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 18:59:17.409667  568461 machine.go:97] duration metric: took 1.489555754s to provisionDockerMachine
	I0327 18:59:17.409679  568461 client.go:171] duration metric: took 10.32259263s to LocalClient.Create
	I0327 18:59:17.409693  568461 start.go:167] duration metric: took 10.322650049s to libmachine.API.Create "addons-408183"
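
The restart above picks up the one-line drop-in written to /etc/sysconfig/crio.minikube. A sketch for verifying the result on the node (file path and expected content taken from the log; the systemctl check is illustrative):

    # Confirm the CRI-O options drop-in and that the service came back up.
    cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio
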
	I0327 18:59:17.409700  568461 start.go:293] postStartSetup for "addons-408183" (driver="docker")
	I0327 18:59:17.409711  568461 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 18:59:17.409774  568461 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 18:59:17.409824  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:17.426841  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:17.514803  568461 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 18:59:17.517827  568461 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 18:59:17.517861  568461 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 18:59:17.517877  568461 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 18:59:17.517891  568461 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0327 18:59:17.517917  568461 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-562206/.minikube/addons for local assets ...
	I0327 18:59:17.517986  568461 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-562206/.minikube/files for local assets ...
	I0327 18:59:17.518015  568461 start.go:296] duration metric: took 108.307917ms for postStartSetup
	I0327 18:59:17.518322  568461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-408183
	I0327 18:59:17.532765  568461 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/config.json ...
	I0327 18:59:17.533057  568461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 18:59:17.533109  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:17.548036  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:17.634443  568461 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0327 18:59:17.638677  568461 start.go:128] duration metric: took 10.554863998s to createHost
	I0327 18:59:17.638700  568461 start.go:83] releasing machines lock for "addons-408183", held for 10.555007013s
	I0327 18:59:17.638772  568461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-408183
	I0327 18:59:17.652947  568461 ssh_runner.go:195] Run: cat /version.json
	I0327 18:59:17.652997  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:17.653022  568461 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 18:59:17.653100  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:17.671892  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:17.674203  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:17.866331  568461 ssh_runner.go:195] Run: systemctl --version
	I0327 18:59:17.870441  568461 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 18:59:18.012690  568461 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 18:59:18.017546  568461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 18:59:18.039396  568461 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0327 18:59:18.039503  568461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 18:59:18.072513  568461 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
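
The two find commands above disable the stock CNI configs by renaming them rather than deleting them, so the step is reversible. A standalone sketch of the same pattern (paths from the log; the filename filters mirror the logged command):

    # Rename bridge/podman CNI configs so CRI-O ignores them, skipping files
    # that are already disabled; mv via sh keeps the {} substitution safe.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
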
	I0327 18:59:18.072542  568461 start.go:494] detecting cgroup driver to use...
	I0327 18:59:18.072597  568461 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 18:59:18.072648  568461 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 18:59:18.089547  568461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 18:59:18.102169  568461 docker.go:217] disabling cri-docker service (if available) ...
	I0327 18:59:18.102264  568461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 18:59:18.117630  568461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 18:59:18.132548  568461 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 18:59:18.228216  568461 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 18:59:18.324625  568461 docker.go:233] disabling docker service ...
	I0327 18:59:18.324717  568461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 18:59:18.345193  568461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 18:59:18.357415  568461 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 18:59:18.451238  568461 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 18:59:18.555683  568461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 18:59:18.568201  568461 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 18:59:18.585273  568461 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 18:59:18.585364  568461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 18:59:18.595761  568461 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 18:59:18.595875  568461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 18:59:18.606153  568461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 18:59:18.616010  568461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 18:59:18.625846  568461 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 18:59:18.635017  568461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 18:59:18.644696  568461 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 18:59:18.660516  568461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 18:59:18.670480  568461 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 18:59:18.679172  568461 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 18:59:18.687831  568461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 18:59:18.771490  568461 ssh_runner.go:195] Run: sudo systemctl restart crio
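
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying a pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A sketch of inspecting the result (expected lines assembled from the individual edits; surrounding keys in the file may differ):

    # Show the drop-in the sed pipeline produced.
    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # expected to contain:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
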
	I0327 18:59:18.886765  568461 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 18:59:18.886857  568461 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 18:59:18.890504  568461 start.go:562] Will wait 60s for crictl version
	I0327 18:59:18.890570  568461 ssh_runner.go:195] Run: which crictl
	I0327 18:59:18.893988  568461 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 18:59:18.932803  568461 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0327 18:59:18.932917  568461 ssh_runner.go:195] Run: crio --version
	I0327 18:59:18.973541  568461 ssh_runner.go:195] Run: crio --version
	I0327 18:59:19.014131  568461 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.24.6 ...
	I0327 18:59:19.016040  568461 cli_runner.go:164] Run: docker network inspect addons-408183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 18:59:19.030207  568461 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0327 18:59:19.033643  568461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
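
The one-liner above is a filter-and-append idiom: strip any stale host.minikube.internal entry, append the fresh mapping, then copy the temp file back so /etc/hosts keeps its inode and permissions. The same pattern, unrolled for readability (IP and name from the log):

    # Rebuild /etc/hosts with exactly one host.minikube.internal entry.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.49.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
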
	I0327 18:59:19.044214  568461 kubeadm.go:877] updating cluster {Name:addons-408183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-408183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 18:59:19.044353  568461 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 18:59:19.044426  568461 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 18:59:19.117536  568461 crio.go:514] all images are preloaded for cri-o runtime.
	I0327 18:59:19.117563  568461 crio.go:433] Images already preloaded, skipping extraction
	I0327 18:59:19.117619  568461 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 18:59:19.154487  568461 crio.go:514] all images are preloaded for cri-o runtime.
	I0327 18:59:19.154510  568461 cache_images.go:84] Images are preloaded, skipping loading
	I0327 18:59:19.154518  568461 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 crio true true} ...
	I0327 18:59:19.154615  568461 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-408183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-408183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 18:59:19.154709  568461 ssh_runner.go:195] Run: crio config
	I0327 18:59:19.206654  568461 cni.go:84] Creating CNI manager for ""
	I0327 18:59:19.206678  568461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0327 18:59:19.206691  568461 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 18:59:19.206736  568461 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-408183 NodeName:addons-408183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 18:59:19.206903  568461 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-408183"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
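
A generated config like the one above can be checked before committing to a real init; a sketch, assuming kubeadm v1.29 semantics (--dry-run renders manifests without changing the node):

    # Validate the rendered kubeadm config without touching the host.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
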
	
	I0327 18:59:19.206983  568461 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 18:59:19.215577  568461 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 18:59:19.215643  568461 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 18:59:19.224122  568461 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0327 18:59:19.241530  568461 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 18:59:19.259035  568461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0327 18:59:19.276757  568461 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0327 18:59:19.280325  568461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 18:59:19.291198  568461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 18:59:19.371641  568461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 18:59:19.387599  568461 certs.go:68] Setting up /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183 for IP: 192.168.49.2
	I0327 18:59:19.387672  568461 certs.go:194] generating shared ca certs ...
	I0327 18:59:19.387703  568461 certs.go:226] acquiring lock for ca certs: {Name:mk95afc777a0fafcf19d589f4cbc5a374d1fe472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:19.387873  568461 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key
	I0327 18:59:19.606729  568461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt ...
	I0327 18:59:19.606758  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt: {Name:mkc1407df9921b1b1f79dc2cb9809d90fb2b1a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:19.607412  568461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key ...
	I0327 18:59:19.607430  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key: {Name:mk8e2d1b4ab6988b14afbd87fb83dc7d30709175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:19.607526  568461 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key
	I0327 18:59:20.210413  568461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt ...
	I0327 18:59:20.210493  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt: {Name:mk13923cc7792bf064cc629fe0aad88c6c06115a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:20.211613  568461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key ...
	I0327 18:59:20.211668  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key: {Name:mk616dcad90ed78e7a85b9be587bbf27c664fc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:20.212610  568461 certs.go:256] generating profile certs ...
	I0327 18:59:20.212742  568461 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.key
	I0327 18:59:20.212778  568461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt with IP's: []
	I0327 18:59:20.477598  568461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt ...
	I0327 18:59:20.477630  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: {Name:mk3ad5c05a8e20893527878ba851766e85a7574b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:20.478567  568461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.key ...
	I0327 18:59:20.478586  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.key: {Name:mk614ddd7c23880a46e4ceed3e4cf56bc346c50e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:20.479033  568461 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.key.e33f5368
	I0327 18:59:20.479057  568461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.crt.e33f5368 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0327 18:59:20.798407  568461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.crt.e33f5368 ...
	I0327 18:59:20.798440  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.crt.e33f5368: {Name:mk9c5037b8d88cc52837d37f9f1e527f065e8fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:20.799090  568461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.key.e33f5368 ...
	I0327 18:59:20.799110  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.key.e33f5368: {Name:mk81eb3cb60905e1c2ed8d18ce8524c87a0f9a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:20.799536  568461 certs.go:381] copying /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.crt.e33f5368 -> /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.crt
	I0327 18:59:20.799636  568461 certs.go:385] copying /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.key.e33f5368 -> /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.key
	I0327 18:59:20.799695  568461 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/proxy-client.key
	I0327 18:59:20.799715  568461 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/proxy-client.crt with IP's: []
	I0327 18:59:21.179960  568461 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/proxy-client.crt ...
	I0327 18:59:21.179992  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/proxy-client.crt: {Name:mk0783405826e242f7d721c4b91a90eb1f29c2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:21.180183  568461 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/proxy-client.key ...
	I0327 18:59:21.180199  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/proxy-client.key: {Name:mk31297d14a4152e5bdb4a089ee4466fed95ce56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:21.180394  568461 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 18:59:21.180442  568461 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem (1082 bytes)
	I0327 18:59:21.180469  568461 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem (1123 bytes)
	I0327 18:59:21.180496  568461 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem (1679 bytes)
	I0327 18:59:21.181087  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 18:59:21.205629  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 18:59:21.229115  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 18:59:21.252811  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 18:59:21.276775  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0327 18:59:21.299942  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0327 18:59:21.323087  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 18:59:21.346379  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 18:59:21.369872  568461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 18:59:21.393138  568461 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 18:59:21.410366  568461 ssh_runner.go:195] Run: openssl version
	I0327 18:59:21.415546  568461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 18:59:21.425012  568461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 18:59:21.428476  568461 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0327 18:59:21.428585  568461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 18:59:21.435461  568461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
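
The b5213941.0 link name above follows OpenSSL's subject-hash lookup convention for CA directories: the hash of the certificate's subject plus a .0 suffix. A sketch of deriving it (path from the log):

    # Compute the subject hash OpenSSL uses to locate this CA.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above
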
	I0327 18:59:21.444592  568461 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 18:59:21.447659  568461 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 18:59:21.447707  568461 kubeadm.go:391] StartCluster: {Name:addons-408183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-408183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 18:59:21.447790  568461 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0327 18:59:21.447859  568461 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 18:59:21.488061  568461 cri.go:89] found id: ""
	I0327 18:59:21.488191  568461 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 18:59:21.496917  568461 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 18:59:21.505767  568461 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0327 18:59:21.505860  568461 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 18:59:21.514773  568461 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 18:59:21.514795  568461 kubeadm.go:156] found existing configuration files:
	
	I0327 18:59:21.514847  568461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0327 18:59:21.523608  568461 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 18:59:21.523673  568461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 18:59:21.531780  568461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0327 18:59:21.540162  568461 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 18:59:21.540225  568461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 18:59:21.548182  568461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0327 18:59:21.556878  568461 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 18:59:21.556988  568461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 18:59:21.565205  568461 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0327 18:59:21.574022  568461 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 18:59:21.574090  568461 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 18:59:21.582362  568461 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0327 18:59:21.687305  568461 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0327 18:59:21.755842  568461 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 18:59:37.008125  568461 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0327 18:59:37.008185  568461 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 18:59:37.008269  568461 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0327 18:59:37.008324  568461 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0327 18:59:37.008357  568461 kubeadm.go:309] OS: Linux
	I0327 18:59:37.008403  568461 kubeadm.go:309] CGROUPS_CPU: enabled
	I0327 18:59:37.008449  568461 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0327 18:59:37.008494  568461 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0327 18:59:37.008541  568461 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0327 18:59:37.008587  568461 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0327 18:59:37.008637  568461 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0327 18:59:37.008681  568461 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0327 18:59:37.008726  568461 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0327 18:59:37.008773  568461 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0327 18:59:37.008842  568461 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 18:59:37.008933  568461 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 18:59:37.009021  568461 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 18:59:37.009081  568461 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 18:59:37.011708  568461 out.go:204]   - Generating certificates and keys ...
	I0327 18:59:37.011817  568461 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 18:59:37.011882  568461 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 18:59:37.011946  568461 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0327 18:59:37.012003  568461 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0327 18:59:37.012061  568461 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0327 18:59:37.012108  568461 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0327 18:59:37.012160  568461 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0327 18:59:37.012272  568461 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-408183 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0327 18:59:37.012323  568461 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0327 18:59:37.012433  568461 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-408183 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0327 18:59:37.012498  568461 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0327 18:59:37.012560  568461 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0327 18:59:37.012603  568461 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0327 18:59:37.012655  568461 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 18:59:37.012704  568461 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 18:59:37.012757  568461 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0327 18:59:37.012809  568461 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 18:59:37.012870  568461 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 18:59:37.012922  568461 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 18:59:37.013001  568461 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 18:59:37.013064  568461 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 18:59:37.014899  568461 out.go:204]   - Booting up control plane ...
	I0327 18:59:37.015148  568461 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 18:59:37.015272  568461 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 18:59:37.015390  568461 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 18:59:37.015559  568461 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 18:59:37.015661  568461 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 18:59:37.015704  568461 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 18:59:37.015864  568461 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 18:59:37.015944  568461 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.502254 seconds
	I0327 18:59:37.016053  568461 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 18:59:37.016183  568461 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 18:59:37.016244  568461 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 18:59:37.016432  568461 kubeadm.go:309] [mark-control-plane] Marking the node addons-408183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 18:59:37.016510  568461 kubeadm.go:309] [bootstrap-token] Using token: 0s02vd.fgzvdljmgl64dupo
	I0327 18:59:37.018884  568461 out.go:204]   - Configuring RBAC rules ...
	I0327 18:59:37.019031  568461 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 18:59:37.019121  568461 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 18:59:37.019270  568461 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 18:59:37.019403  568461 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 18:59:37.019522  568461 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 18:59:37.019636  568461 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 18:59:37.019757  568461 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 18:59:37.019802  568461 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 18:59:37.019852  568461 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 18:59:37.019856  568461 kubeadm.go:309] 
	I0327 18:59:37.019918  568461 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 18:59:37.019923  568461 kubeadm.go:309] 
	I0327 18:59:37.020003  568461 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 18:59:37.020008  568461 kubeadm.go:309] 
	I0327 18:59:37.020034  568461 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 18:59:37.020096  568461 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 18:59:37.020148  568461 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 18:59:37.020152  568461 kubeadm.go:309] 
	I0327 18:59:37.020208  568461 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 18:59:37.020212  568461 kubeadm.go:309] 
	I0327 18:59:37.020262  568461 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 18:59:37.020266  568461 kubeadm.go:309] 
	I0327 18:59:37.020321  568461 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 18:59:37.020398  568461 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 18:59:37.020469  568461 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 18:59:37.020476  568461 kubeadm.go:309] 
	I0327 18:59:37.020563  568461 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 18:59:37.020643  568461 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 18:59:37.020648  568461 kubeadm.go:309] 
	I0327 18:59:37.020734  568461 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0s02vd.fgzvdljmgl64dupo \
	I0327 18:59:37.020842  568461 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4a65f18ab9e35af3e6d901fd8512fa6205499b15fb7e04134e540c395992054f \
	I0327 18:59:37.020865  568461 kubeadm.go:309] 	--control-plane 
	I0327 18:59:37.020869  568461 kubeadm.go:309] 
	I0327 18:59:37.020958  568461 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 18:59:37.020963  568461 kubeadm.go:309] 
	I0327 18:59:37.021048  568461 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0s02vd.fgzvdljmgl64dupo \
	I0327 18:59:37.021171  568461 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4a65f18ab9e35af3e6d901fd8512fa6205499b15fb7e04134e540c395992054f 
	I0327 18:59:37.021187  568461 cni.go:84] Creating CNI manager for ""
	I0327 18:59:37.021195  568461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0327 18:59:37.023424  568461 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0327 18:59:37.025671  568461 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0327 18:59:37.036281  568461 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0327 18:59:37.036305  568461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0327 18:59:37.094682  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
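
Once the manifest is applied, the CNI pods should roll out in kube-system. A sketch for confirming that (the daemonset name "kindnet" is an assumption based on the upstream kindnet manifest minikube ships):

    # Wait for the kindnet daemonset applied above to become ready.
    sudo /var/lib/minikube/binaries/v1.29.3/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      rollout status daemonset/kindnet --timeout=120s
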
	I0327 18:59:37.362486  568461 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 18:59:37.362578  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:37.362617  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-408183 minikube.k8s.io/updated_at=2024_03_27T18_59_37_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28 minikube.k8s.io/name=addons-408183 minikube.k8s.io/primary=true
	I0327 18:59:37.498306  568461 ops.go:34] apiserver oom_adj: -16
	I0327 18:59:37.498462  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:38.003778  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:38.499532  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:39.002012  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:39.499235  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:40.001761  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:40.499400  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:41.000436  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:41.498725  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:41.998805  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:42.498633  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:42.998539  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:43.499433  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:43.998707  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:44.498591  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:45.001481  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:45.499302  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:45.998913  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:46.498692  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:47.003763  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:47.499271  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:48.000721  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:48.499180  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:48.998527  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:49.498608  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:49.998655  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:50.499295  568461 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 18:59:50.597033  568461 kubeadm.go:1107] duration metric: took 13.234502433s to wait for elevateKubeSystemPrivileges
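The burst of `kubectl get sa default` calls above is minikube polling, at roughly 500ms intervals, until the `default` ServiceAccount exists, which is the signal that the apiserver is ready to admit workloads. A minimal shell sketch of the same wait, using the paths from the log (illustrative only, not minikube's actual Go implementation):

	# poll until the default ServiceAccount is created
	until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	done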
	W0327 18:59:50.597075  568461 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 18:59:50.597084  568461 kubeadm.go:393] duration metric: took 29.149380814s to StartCluster
	I0327 18:59:50.597100  568461 settings.go:142] acquiring lock: {Name:mkffcd59f6abeb2b3cc53bb555eb7fb5f175c67e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:50.597667  568461 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 18:59:50.598098  568461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/kubeconfig: {Name:mk1481518c17ad7c54533eeb54c75c7968328394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:59:50.598296  568461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0327 18:59:50.598334  568461 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 18:59:50.600544  568461 out.go:177] * Verifying Kubernetes components...
	I0327 18:59:50.598565  568461 config.go:182] Loaded profile config "addons-408183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 18:59:50.598575  568461 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
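Each entry in the `toEnable` map above corresponds to a minikube addon that can also be toggled from the CLI; for example (standard minikube commands, shown for orientation only):

	minikube -p addons-408183 addons enable ingress
	minikube -p addons-408183 addons list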
	I0327 18:59:50.602256  568461 addons.go:69] Setting yakd=true in profile "addons-408183"
	I0327 18:59:50.602278  568461 addons.go:234] Setting addon yakd=true in "addons-408183"
	I0327 18:59:50.602310  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.602794  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.602988  568461 addons.go:69] Setting ingress-dns=true in profile "addons-408183"
	I0327 18:59:50.603016  568461 addons.go:234] Setting addon ingress-dns=true in "addons-408183"
	I0327 18:59:50.603046  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.603415  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.603906  568461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 18:59:50.604087  568461 addons.go:69] Setting inspektor-gadget=true in profile "addons-408183"
	I0327 18:59:50.604108  568461 addons.go:234] Setting addon inspektor-gadget=true in "addons-408183"
	I0327 18:59:50.604133  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.604152  568461 addons.go:69] Setting cloud-spanner=true in profile "addons-408183"
	I0327 18:59:50.604185  568461 addons.go:234] Setting addon cloud-spanner=true in "addons-408183"
	I0327 18:59:50.604208  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.604494  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.604574  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.606720  568461 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-408183"
	I0327 18:59:50.606776  568461 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-408183"
	I0327 18:59:50.606802  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.607201  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.607598  568461 addons.go:69] Setting metrics-server=true in profile "addons-408183"
	I0327 18:59:50.607633  568461 addons.go:234] Setting addon metrics-server=true in "addons-408183"
	I0327 18:59:50.607666  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.608064  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.615246  568461 addons.go:69] Setting default-storageclass=true in profile "addons-408183"
	I0327 18:59:50.615293  568461 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-408183"
	I0327 18:59:50.615589  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.626254  568461 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-408183"
	I0327 18:59:50.626294  568461 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-408183"
	I0327 18:59:50.626337  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.626781  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.633247  568461 addons.go:69] Setting gcp-auth=true in profile "addons-408183"
	I0327 18:59:50.633292  568461 mustload.go:65] Loading cluster: addons-408183
	I0327 18:59:50.633469  568461 config.go:182] Loaded profile config "addons-408183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 18:59:50.633716  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.650681  568461 addons.go:69] Setting registry=true in profile "addons-408183"
	I0327 18:59:50.650722  568461 addons.go:234] Setting addon registry=true in "addons-408183"
	I0327 18:59:50.650772  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.651224  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.651372  568461 addons.go:69] Setting ingress=true in profile "addons-408183"
	I0327 18:59:50.651391  568461 addons.go:234] Setting addon ingress=true in "addons-408183"
	I0327 18:59:50.651417  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.674165  568461 addons.go:69] Setting storage-provisioner=true in profile "addons-408183"
	I0327 18:59:50.674215  568461 addons.go:234] Setting addon storage-provisioner=true in "addons-408183"
	I0327 18:59:50.674256  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.675220  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.681405  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.705094  568461 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-408183"
	I0327 18:59:50.705132  568461 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-408183"
	I0327 18:59:50.705430  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.732977  568461 addons.go:69] Setting volumesnapshots=true in profile "addons-408183"
	I0327 18:59:50.733017  568461 addons.go:234] Setting addon volumesnapshots=true in "addons-408183"
	I0327 18:59:50.733056  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.733479  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.774259  568461 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0327 18:59:50.781393  568461 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 18:59:50.781413  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0327 18:59:50.781469  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
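The Go template passed to `docker container inspect -f` resolves the host port that Docker mapped to the container's SSH port 22/tcp; the `sshutil` lines further down show the result (port 33518). The standalone equivalent, for reference, is:

	docker port addons-408183 22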
	I0327 18:59:50.785009  568461 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0327 18:59:50.781180  568461 addons.go:234] Setting addon default-storageclass=true in "addons-408183"
	I0327 18:59:50.786181  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.789169  568461 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0327 18:59:50.789184  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0327 18:59:50.789245  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.803663  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0327 18:59:50.808463  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0327 18:59:50.810336  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0327 18:59:50.812421  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0327 18:59:50.810469  568461 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0327 18:59:50.810474  568461 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0327 18:59:50.810478  568461 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0327 18:59:50.810520  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.816072  568461 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0327 18:59:50.816094  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0327 18:59:50.816144  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.815076  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.841565  568461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0327 18:59:50.845125  568461 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0327 18:59:50.849305  568461 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 18:59:50.849386  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0327 18:59:50.849494  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.855272  568461 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0327 18:59:50.855300  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0327 18:59:50.855368  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.862486  568461 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0327 18:59:50.862512  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0327 18:59:50.862577  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.886165  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0327 18:59:50.881114  568461 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-408183"
	I0327 18:59:50.893864  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0327 18:59:50.888893  568461 out.go:177]   - Using image docker.io/registry:2.8.3
	I0327 18:59:50.888954  568461 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 18:59:50.888958  568461 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 18:59:50.888988  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:50.897728  568461 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0327 18:59:50.899479  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:50.900066  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0327 18:59:50.900533  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:50.900545  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0327 18:59:50.906382  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.906476  568461 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 18:59:50.908069  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 18:59:50.908130  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.912740  568461 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0327 18:59:50.922351  568461 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0327 18:59:50.922373  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0327 18:59:50.922437  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.927386  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:50.912941  568461 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 18:59:50.932381  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0327 18:59:50.934479  568461 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0327 18:59:50.936909  568461 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0327 18:59:50.940111  568461 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 18:59:50.940131  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0327 18:59:50.940194  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:50.937080  568461 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0327 18:59:50.940428  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0327 18:59:50.940468  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:51.011434  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.012608  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.036384  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.037547  568461 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 18:59:51.037572  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 18:59:51.037641  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:51.067025  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.078087  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.087882  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.104932  568461 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0327 18:59:51.104173  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.113008  568461 out.go:177]   - Using image docker.io/busybox:stable
	I0327 18:59:51.124351  568461 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 18:59:51.124387  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0327 18:59:51.124489  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:51.135999  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.153202  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.154123  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	W0327 18:59:51.156206  568461 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0327 18:59:51.156233  568461 retry.go:31] will retry after 348.307568ms: ssh: handshake failed: EOF
	I0327 18:59:51.178146  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:51.214779  568461 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 18:59:51.396116  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0327 18:59:51.411554  568461 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0327 18:59:51.411581  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0327 18:59:51.480678  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0327 18:59:51.483722  568461 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0327 18:59:51.483746  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0327 18:59:51.504811  568461 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0327 18:59:51.504839  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0327 18:59:51.507841  568461 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0327 18:59:51.507872  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0327 18:59:51.573300  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 18:59:51.580052  568461 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0327 18:59:51.580077  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0327 18:59:51.588324  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 18:59:51.603060  568461 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0327 18:59:51.603086  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0327 18:59:51.607251  568461 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0327 18:59:51.607301  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0327 18:59:51.608225  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0327 18:59:51.611369  568461 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0327 18:59:51.611403  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0327 18:59:51.628968  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0327 18:59:51.640442  568461 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0327 18:59:51.640463  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0327 18:59:51.665676  568461 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0327 18:59:51.665699  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0327 18:59:51.720615  568461 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0327 18:59:51.720690  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0327 18:59:51.758315  568461 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0327 18:59:51.758340  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0327 18:59:51.768724  568461 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0327 18:59:51.768751  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0327 18:59:51.778486  568461 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0327 18:59:51.778514  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0327 18:59:51.778958  568461 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0327 18:59:51.778975  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0327 18:59:51.867932  568461 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0327 18:59:51.867996  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0327 18:59:51.877777  568461 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 18:59:51.877856  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0327 18:59:51.949386  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0327 18:59:51.966055  568461 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0327 18:59:51.966082  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0327 18:59:51.972128  568461 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0327 18:59:51.972156  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0327 18:59:51.980530  568461 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0327 18:59:51.980557  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0327 18:59:51.994474  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0327 18:59:52.027290  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0327 18:59:52.030436  568461 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0327 18:59:52.030474  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0327 18:59:52.078682  568461 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0327 18:59:52.078709  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0327 18:59:52.140131  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0327 18:59:52.208525  568461 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0327 18:59:52.208552  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0327 18:59:52.248572  568461 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 18:59:52.248596  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0327 18:59:52.253679  568461 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0327 18:59:52.253704  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0327 18:59:52.371615  568461 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 18:59:52.371635  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0327 18:59:52.404519  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 18:59:52.408174  568461 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0327 18:59:52.408197  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0327 18:59:52.586297  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0327 18:59:52.640620  568461 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0327 18:59:52.640647  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0327 18:59:52.747749  568461 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0327 18:59:52.747773  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0327 18:59:52.825701  568461 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0327 18:59:52.825725  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0327 18:59:52.910561  568461 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 18:59:52.910587  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0327 18:59:52.992888  568461 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.151288004s)
	I0327 18:59:52.992917  568461 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
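The sed pipeline that just completed injects the following `hosts` block (copied from the command above) into the CoreDNS Corefile, so cluster DNS resolves host.minikube.internal to the Docker network gateway 192.168.49.1:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}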
	I0327 18:59:52.993837  568461 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.779034196s)
	I0327 18:59:52.994588  568461 node_ready.go:35] waiting up to 6m0s for node "addons-408183" to be "Ready" ...
	I0327 18:59:53.007516  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0327 18:59:53.875173  568461 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-408183" context rescaled to 1 replicas
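Rescaling the coredns deployment to one replica, as logged above, is equivalent to running (standard kubectl, for reference):

	kubectl --context addons-408183 -n kube-system scale deployment coredns --replicas=1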
	I0327 18:59:55.126020  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 18:59:55.167919  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.68720251s)
	I0327 18:59:55.167968  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.594634564s)
	I0327 18:59:55.167997  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.771855612s)
	I0327 18:59:55.677011  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.088650732s)
	I0327 18:59:55.677067  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.068815846s)
	I0327 18:59:55.945520  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.31651894s)
	I0327 18:59:55.945724  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.996302674s)
	I0327 18:59:55.945754  568461 addons.go:470] Verifying addon registry=true in "addons-408183"
	I0327 18:59:55.947794  568461 out.go:177] * Verifying registry addon...
	I0327 18:59:55.950615  568461 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0327 18:59:55.985204  568461 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 18:59:55.985238  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:56.493261  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:56.940854  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.800682949s)
	I0327 18:59:56.943238  568461 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-408183 service yakd-dashboard -n yakd-dashboard
	
	I0327 18:59:56.941062  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.913744977s)
	I0327 18:59:56.941198  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.536647344s)
	I0327 18:59:56.941221  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.946726491s)
	I0327 18:59:56.941253  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.354918027s)
	I0327 18:59:56.945449  568461 addons.go:470] Verifying addon metrics-server=true in "addons-408183"
	W0327 18:59:56.945504  568461 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0327 18:59:56.945550  568461 retry.go:31] will retry after 287.511036ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
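This failure is the common CRD/CR ordering race: a single `kubectl apply` creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass in the same invocation, and the CR's REST mapping is looked up before the just-created CRDs are established in API discovery. minikube handles it by retrying (the `apply --force` run at 18:59:57 below succeeds). A hand-rolled way to avoid the race, sketched with standard kubectl:

	# create the CRDs first, wait until they are established, then apply the CRs
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml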
	I0327 18:59:56.945589  568461 addons.go:470] Verifying addon ingress=true in "addons-408183"
	I0327 18:59:56.949289  568461 out.go:177] * Verifying ingress addon...
	I0327 18:59:56.952678  568461 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0327 18:59:56.961144  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:56.965151  568461 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0327 18:59:56.965171  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 18:59:57.175428  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.167862773s)
	I0327 18:59:57.175512  568461 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-408183"
	I0327 18:59:57.177591  568461 out.go:177] * Verifying csi-hostpath-driver addon...
	I0327 18:59:57.180701  568461 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0327 18:59:57.188983  568461 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 18:59:57.189053  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 18:59:57.233689  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0327 18:59:57.457414  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 18:59:57.458586  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:57.497809  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 18:59:57.684909  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 18:59:57.798625  568461 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0327 18:59:57.798713  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:57.816477  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:57.940762  568461 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0327 18:59:57.955065  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:57.958831  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 18:59:57.982475  568461 addons.go:234] Setting addon gcp-auth=true in "addons-408183"
	I0327 18:59:57.982583  568461 host.go:66] Checking if "addons-408183" exists ...
	I0327 18:59:57.983397  568461 cli_runner.go:164] Run: docker container inspect addons-408183 --format={{.State.Status}}
	I0327 18:59:58.010385  568461 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0327 18:59:58.010528  568461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-408183
	I0327 18:59:58.044007  568461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/addons-408183/id_rsa Username:docker}
	I0327 18:59:58.186360  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 18:59:58.455345  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:58.458507  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 18:59:58.686165  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 18:59:58.965367  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 18:59:58.970051  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:59.189184  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 18:59:59.457341  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 18:59:59.458485  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:59.501770  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 18:59:59.686270  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 18:59:59.957886  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 18:59:59.959366  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:00.189391  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:00.512961  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:00.513919  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:00.695878  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:00.736411  568461 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.725988263s)
	I0327 19:00:00.750761  568461 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0327 19:00:00.736864  568461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.50312877s)
	I0327 19:00:00.770240  568461 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0327 19:00:00.772203  568461 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0327 19:00:00.772236  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0327 19:00:00.883035  568461 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0327 19:00:00.883069  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0327 19:00:00.962644  568461 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 19:00:00.962672  568461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0327 19:00:00.963216  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:00.964622  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:00.991752  568461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0327 19:00:01.186917  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:01.462435  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:01.476564  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:01.687219  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:01.878317  568461 addons.go:470] Verifying addon gcp-auth=true in "addons-408183"
	I0327 19:00:01.880446  568461 out.go:177] * Verifying gcp-auth addon...
	I0327 19:00:01.883174  568461 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0327 19:00:01.909158  568461 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0327 19:00:01.909227  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:01.955666  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:01.958378  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:02.026194  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:02.185634  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:02.388531  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:02.458700  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:02.463580  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:02.686643  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:02.886976  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:02.957038  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:02.958286  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:03.186836  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:03.389381  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:03.455772  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:03.457860  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:03.685887  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:03.887515  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:03.955119  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:03.957347  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:04.185655  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:04.386865  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:04.456516  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:04.456748  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:04.498032  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:04.685692  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:04.887352  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:04.956161  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:04.957529  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:05.185858  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:05.386975  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:05.455681  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:05.457013  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:05.685247  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:05.887143  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:05.955399  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:05.957492  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:06.185934  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:06.387684  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:06.455555  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:06.457594  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:06.498343  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:06.686521  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:06.888101  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:06.955998  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:06.958885  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:07.185081  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:07.386901  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:07.454671  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:07.458256  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:07.685611  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:07.886817  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:07.956520  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:07.958165  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:08.185566  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:08.387282  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:08.455625  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:08.457684  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:08.685335  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:08.887792  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:08.956427  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:08.958141  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:08.998532  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:09.185542  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:09.387569  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:09.456249  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:09.458743  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:09.685817  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:09.886808  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:09.955507  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:09.957762  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:10.186313  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:10.387204  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:10.456946  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:10.462597  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:10.686076  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:10.887286  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:10.956313  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:10.957688  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:11.185693  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:11.386690  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:11.454831  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:11.456874  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:11.498209  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:11.685838  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:11.887857  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:11.956031  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:11.957341  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:12.185141  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:12.386692  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:12.455826  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:12.456804  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:12.685002  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:12.886599  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:12.957707  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:12.958126  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:13.184975  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:13.387367  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:13.455864  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:13.458198  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:13.498615  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:13.684977  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:13.886960  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:13.957237  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:13.957406  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:14.185263  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:14.387501  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:14.456326  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:14.457803  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:14.685762  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:14.886616  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:14.955784  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:14.957058  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:15.185858  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:15.386798  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:15.454969  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:15.456894  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:15.499352  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:15.685482  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:15.887106  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:15.956098  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:15.957521  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:16.185347  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:16.386977  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:16.455713  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:16.456982  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:16.686191  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:16.886776  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:16.957568  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:16.959052  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:17.185807  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:17.387048  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:17.455870  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:17.458247  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:17.686522  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:17.888012  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:17.955830  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:17.958577  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:17.998142  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:18.186040  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:18.387051  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:18.455636  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:18.456961  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:18.685240  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:18.887119  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:18.956688  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:18.958203  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:19.185365  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:19.387658  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:19.455533  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:19.457567  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:19.685300  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:19.887398  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:19.955590  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:19.958257  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:19.998185  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:20.185231  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:20.393857  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:20.455295  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:20.457249  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:20.685249  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:20.887821  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:20.955751  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:20.957805  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:21.185617  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:21.388674  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:21.456200  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:21.457811  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:21.685493  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:21.887332  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:21.959517  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:21.960611  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:22.005173  568461 node_ready.go:53] node "addons-408183" has status "Ready":"False"
	I0327 19:00:22.185121  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:22.388203  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:22.456456  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:22.457811  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:22.685053  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:22.887331  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:22.956552  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:22.957877  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:23.185692  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:23.387639  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:23.454980  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:23.456955  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:23.685257  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:23.886803  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:23.959266  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:23.960206  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:24.031016  568461 node_ready.go:49] node "addons-408183" has status "Ready":"True"
	I0327 19:00:24.031055  568461 node_ready.go:38] duration metric: took 31.03643195s for node "addons-408183" to be "Ready" ...
	I0327 19:00:24.031067  568461 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
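The node_ready entries above come from a poll loop that re-reads the node's Ready condition every couple of seconds until it reports True. A minimal client-go sketch of that pattern follows; it is not minikube's actual node_ready.go, and the kubeconfig path and 2s interval are assumptions:

```go
// Sketch: poll a node's Ready condition until it turns True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config (assumed)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-408183", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
				if c.Status == corev1.ConditionTrue {
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // matches the ~2s cadence visible in the timestamps above
	}
}
```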
	I0327 19:00:24.051560  568461 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bhhnt" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:24.204453  568461 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0327 19:00:24.204486  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:24.409669  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:24.479633  568461 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0327 19:00:24.479658  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:24.481147  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
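kapi.go first resolves an addon's label selector to a pod list ("Found 3 Pods for label selector ...") and then keeps polling until none of the matched pods is Pending. A hedged snippet of the list step, reusing the client from the sketch above:

```go
// List pods across all namespaces carrying the addon label and report their phases.
pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
	metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=csi-hostpath-driver"})
if err != nil {
	panic(err)
}
fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
for _, p := range pods.Items {
	fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase) // Pending until scheduled and images are pulled
}
```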
	I0327 19:00:24.710935  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:24.904459  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:24.957881  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:24.958475  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:25.188444  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:25.411567  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:25.463238  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:25.464765  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:25.565628  568461 pod_ready.go:92] pod "coredns-76f75df574-bhhnt" in "kube-system" namespace has status "Ready":"True"
	I0327 19:00:25.565652  568461 pod_ready.go:81] duration metric: took 1.514050983s for pod "coredns-76f75df574-bhhnt" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.565673  568461 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-408183" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.584896  568461 pod_ready.go:92] pod "etcd-addons-408183" in "kube-system" namespace has status "Ready":"True"
	I0327 19:00:25.584923  568461 pod_ready.go:81] duration metric: took 19.241192ms for pod "etcd-addons-408183" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.584938  568461 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-408183" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.602806  568461 pod_ready.go:92] pod "kube-apiserver-addons-408183" in "kube-system" namespace has status "Ready":"True"
	I0327 19:00:25.602833  568461 pod_ready.go:81] duration metric: took 17.886731ms for pod "kube-apiserver-addons-408183" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.602845  568461 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-408183" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.629560  568461 pod_ready.go:92] pod "kube-controller-manager-addons-408183" in "kube-system" namespace has status "Ready":"True"
	I0327 19:00:25.629586  568461 pod_ready.go:81] duration metric: took 26.733204ms for pod "kube-controller-manager-addons-408183" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.629607  568461 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bfs7l" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.636564  568461 pod_ready.go:92] pod "kube-proxy-bfs7l" in "kube-system" namespace has status "Ready":"True"
	I0327 19:00:25.636597  568461 pod_ready.go:81] duration metric: took 6.981204ms for pod "kube-proxy-bfs7l" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.636608  568461 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-408183" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.687792  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:25.888456  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:25.963601  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:25.964128  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:25.999227  568461 pod_ready.go:92] pod "kube-scheduler-addons-408183" in "kube-system" namespace has status "Ready":"True"
	I0327 19:00:25.999259  568461 pod_ready.go:81] duration metric: took 362.642627ms for pod "kube-scheduler-addons-408183" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:25.999272  568461 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:26.195438  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:26.387617  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:26.458145  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:26.460415  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:26.687558  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:26.895020  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:26.959688  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:26.961137  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:27.188698  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:27.387062  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:27.456314  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:27.459383  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:27.686681  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:27.887814  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:27.957220  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:27.957369  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:28.009606  568461 pod_ready.go:102] pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace has status "Ready":"False"
	I0327 19:00:28.187243  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:28.392054  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:28.459387  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:28.466151  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:28.687746  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:28.887980  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:28.967773  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:28.968798  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:29.188122  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:29.388358  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:29.459386  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:29.461014  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:29.689204  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:29.888491  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:29.959010  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:29.961599  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:30.012742  568461 pod_ready.go:102] pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace has status "Ready":"False"
	I0327 19:00:30.194172  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:30.390308  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:30.456323  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:30.459749  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:30.688974  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:30.888535  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:30.963911  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:30.965279  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:31.189670  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:31.387715  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:31.457506  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:31.460982  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:31.688223  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:31.889668  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:31.963579  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:31.963883  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:32.187898  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:32.389444  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:32.466782  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:32.471654  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:32.514957  568461 pod_ready.go:102] pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace has status "Ready":"False"
	I0327 19:00:32.687261  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:32.888518  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:32.963411  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:32.963736  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:33.187879  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:33.389118  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:33.458590  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:33.463859  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:33.686412  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:33.887199  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:33.956175  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:33.962467  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:34.187467  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:34.388325  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:34.456174  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:34.459038  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:34.686648  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:34.887802  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:34.960436  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:34.961815  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:35.011451  568461 pod_ready.go:102] pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace has status "Ready":"False"
	I0327 19:00:35.187858  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:35.387507  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:35.460952  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:35.463120  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:35.690048  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:35.890165  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:35.962726  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:35.964281  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:36.188146  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:36.388144  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:36.461285  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:36.465250  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:36.696426  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:36.889102  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:36.977045  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:36.987426  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:37.019819  568461 pod_ready.go:102] pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace has status "Ready":"False"
	I0327 19:00:37.191037  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:37.387283  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:37.464733  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:37.465589  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:37.690401  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:37.889100  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:37.962011  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:37.964176  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:38.191298  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:38.388271  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:38.458660  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:38.471589  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:38.688094  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:38.887737  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:38.988086  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:38.997432  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:39.187228  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:39.387303  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:39.456910  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:39.461716  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:39.510828  568461 pod_ready.go:102] pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace has status "Ready":"False"
	I0327 19:00:39.687241  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:39.887721  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:39.973077  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:39.973842  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:40.186469  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:40.387545  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:40.459078  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:40.459852  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:40.686985  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:40.886947  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:40.958367  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:40.960187  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:41.189975  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:41.387891  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:41.455646  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:41.459265  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:41.690949  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:41.887974  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:41.962503  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:41.969027  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:42.013264  568461 pod_ready.go:102] pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace has status "Ready":"False"
	I0327 19:00:42.188497  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:42.387598  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:42.456670  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:42.459427  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:42.687246  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:42.887579  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:42.959176  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:42.960166  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:43.187465  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:43.405879  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:43.459006  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:43.463692  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:43.687262  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:43.887147  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:43.957171  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:43.960351  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:44.186791  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:44.403316  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:44.455840  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:44.457990  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:44.511043  568461 pod_ready.go:92] pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace has status "Ready":"True"
	I0327 19:00:44.511072  568461 pod_ready.go:81] duration metric: took 18.511791897s for pod "metrics-server-69cf46c98-qwvb6" in "kube-system" namespace to be "Ready" ...
	I0327 19:00:44.511092  568461 pod_ready.go:38] duration metric: took 20.48001117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
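The pod_ready checks above key off the pod's PodReady condition rather than its phase, which is why a Running pod with unready containers can still log "Ready":"False". A sketch of that condition check, again assuming the client built in the first example:

```go
// Read one pod and report its Ready condition, in the spirit of pod_ready.go:92.
pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
	"metrics-server-69cf46c98-qwvb6", metav1.GetOptions{})
if err != nil {
	panic(err)
}
for _, c := range pod.Status.Conditions {
	if c.Type == corev1.PodReady {
		fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, c.Status)
	}
}
```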
	I0327 19:00:44.511137  568461 api_server.go:52] waiting for apiserver process to appear ...
	I0327 19:00:44.511225  568461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:00:44.524986  568461 api_server.go:72] duration metric: took 53.926622106s to wait for apiserver process to appear ...
	I0327 19:00:44.525011  568461 api_server.go:88] waiting for apiserver healthz status ...
	I0327 19:00:44.525054  568461 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:00:44.533740  568461 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0327 19:00:44.535013  568461 api_server.go:141] control plane version: v1.29.3
	I0327 19:00:44.535037  568461 api_server.go:131] duration metric: took 10.019026ms to wait for apiserver health ...
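api_server.go calls the health endpoint directly and treats a 200 with body "ok" as healthy. A minimal probe of the same URL (crypto/tls, net/http, io, time, fmt in scope); skipping TLS verification is an assumption made here only to get past minikube's self-signed certificate in a throwaway check:

```go
// Hit the apiserver healthz endpoint and print status plus body ("ok" when healthy).
tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} // sketch only
hc := &http.Client{Transport: tr, Timeout: 5 * time.Second}
resp, err := hc.Get("https://192.168.49.2:8443/healthz")
if err != nil {
	panic(err)
}
defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body)
fmt.Printf("https://192.168.49.2:8443/healthz returned %d:\n%s\n", resp.StatusCode, body)
```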
	I0327 19:00:44.535046  568461 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 19:00:44.544927  568461 system_pods.go:59] 18 kube-system pods found
	I0327 19:00:44.544968  568461 system_pods.go:61] "coredns-76f75df574-bhhnt" [f636e147-fd58-4509-9cd1-c689a449a3fe] Running
	I0327 19:00:44.544975  568461 system_pods.go:61] "csi-hostpath-attacher-0" [265c104a-1743-46ee-82d7-8d2291086a57] Running
	I0327 19:00:44.544979  568461 system_pods.go:61] "csi-hostpath-resizer-0" [f1fd73f3-c762-43c3-be39-1be7911e4191] Running
	I0327 19:00:44.544987  568461 system_pods.go:61] "csi-hostpathplugin-7mx7s" [c0ee14d2-0938-4d41-b2e6-a131b6255115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 19:00:44.544993  568461 system_pods.go:61] "etcd-addons-408183" [dbdba5a6-4ec7-4259-970b-54733264f5ac] Running
	I0327 19:00:44.544998  568461 system_pods.go:61] "kindnet-kt86z" [81e2b362-4107-4404-b219-c11bbf7df6b1] Running
	I0327 19:00:44.545002  568461 system_pods.go:61] "kube-apiserver-addons-408183" [3a2c33e2-5c18-4f9e-be61-fc227abe46dc] Running
	I0327 19:00:44.545006  568461 system_pods.go:61] "kube-controller-manager-addons-408183" [8fccb017-6d12-43f7-88d2-dad4a0ddd191] Running
	I0327 19:00:44.545014  568461 system_pods.go:61] "kube-ingress-dns-minikube" [a2a7db0c-b3a4-491d-bef5-73febdd9a49a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0327 19:00:44.545020  568461 system_pods.go:61] "kube-proxy-bfs7l" [bc31839f-2efa-434c-b39c-2fd6099d7203] Running
	I0327 19:00:44.545032  568461 system_pods.go:61] "kube-scheduler-addons-408183" [786a7432-056b-45f4-820e-455498104cad] Running
	I0327 19:00:44.545036  568461 system_pods.go:61] "metrics-server-69cf46c98-qwvb6" [c98ba762-4ee9-431f-8331-f7b0859f18c0] Running
	I0327 19:00:44.545044  568461 system_pods.go:61] "nvidia-device-plugin-daemonset-qkdl4" [78361e37-2128-4937-8e3a-361cd2184fa5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0327 19:00:44.545053  568461 system_pods.go:61] "registry-9wfw5" [6c3f112a-7577-41d6-b765-1344e134d816] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0327 19:00:44.545061  568461 system_pods.go:61] "registry-proxy-s2lkz" [ecdacb2c-048d-4a04-b2d2-648381ae630e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 19:00:44.545065  568461 system_pods.go:61] "snapshot-controller-58dbcc7b99-lxj4l" [3e197ded-0bf3-4605-88af-e91ed2b97019] Running
	I0327 19:00:44.545075  568461 system_pods.go:61] "snapshot-controller-58dbcc7b99-v2mbw" [a6f1f6c4-24b8-439b-9bc8-86b5b55141e7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 19:00:44.545080  568461 system_pods.go:61] "storage-provisioner" [940b5230-6bad-4bfd-9c0e-1284e9c59da3] Running
	I0327 19:00:44.545087  568461 system_pods.go:74] duration metric: took 10.016548ms to wait for pod list to return data ...
	I0327 19:00:44.545094  568461 default_sa.go:34] waiting for default service account to be created ...
	I0327 19:00:44.548502  568461 default_sa.go:45] found service account: "default"
	I0327 19:00:44.548527  568461 default_sa.go:55] duration metric: took 3.42236ms for default service account to be created ...
	I0327 19:00:44.548538  568461 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 19:00:44.559200  568461 system_pods.go:86] 18 kube-system pods found
	I0327 19:00:44.559234  568461 system_pods.go:89] "coredns-76f75df574-bhhnt" [f636e147-fd58-4509-9cd1-c689a449a3fe] Running
	I0327 19:00:44.559242  568461 system_pods.go:89] "csi-hostpath-attacher-0" [265c104a-1743-46ee-82d7-8d2291086a57] Running
	I0327 19:00:44.559247  568461 system_pods.go:89] "csi-hostpath-resizer-0" [f1fd73f3-c762-43c3-be39-1be7911e4191] Running
	I0327 19:00:44.559254  568461 system_pods.go:89] "csi-hostpathplugin-7mx7s" [c0ee14d2-0938-4d41-b2e6-a131b6255115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0327 19:00:44.559260  568461 system_pods.go:89] "etcd-addons-408183" [dbdba5a6-4ec7-4259-970b-54733264f5ac] Running
	I0327 19:00:44.559266  568461 system_pods.go:89] "kindnet-kt86z" [81e2b362-4107-4404-b219-c11bbf7df6b1] Running
	I0327 19:00:44.559271  568461 system_pods.go:89] "kube-apiserver-addons-408183" [3a2c33e2-5c18-4f9e-be61-fc227abe46dc] Running
	I0327 19:00:44.559276  568461 system_pods.go:89] "kube-controller-manager-addons-408183" [8fccb017-6d12-43f7-88d2-dad4a0ddd191] Running
	I0327 19:00:44.559283  568461 system_pods.go:89] "kube-ingress-dns-minikube" [a2a7db0c-b3a4-491d-bef5-73febdd9a49a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0327 19:00:44.559295  568461 system_pods.go:89] "kube-proxy-bfs7l" [bc31839f-2efa-434c-b39c-2fd6099d7203] Running
	I0327 19:00:44.559302  568461 system_pods.go:89] "kube-scheduler-addons-408183" [786a7432-056b-45f4-820e-455498104cad] Running
	I0327 19:00:44.559306  568461 system_pods.go:89] "metrics-server-69cf46c98-qwvb6" [c98ba762-4ee9-431f-8331-f7b0859f18c0] Running
	I0327 19:00:44.559315  568461 system_pods.go:89] "nvidia-device-plugin-daemonset-qkdl4" [78361e37-2128-4937-8e3a-361cd2184fa5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0327 19:00:44.559321  568461 system_pods.go:89] "registry-9wfw5" [6c3f112a-7577-41d6-b765-1344e134d816] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0327 19:00:44.559333  568461 system_pods.go:89] "registry-proxy-s2lkz" [ecdacb2c-048d-4a04-b2d2-648381ae630e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0327 19:00:44.559337  568461 system_pods.go:89] "snapshot-controller-58dbcc7b99-lxj4l" [3e197ded-0bf3-4605-88af-e91ed2b97019] Running
	I0327 19:00:44.559350  568461 system_pods.go:89] "snapshot-controller-58dbcc7b99-v2mbw" [a6f1f6c4-24b8-439b-9bc8-86b5b55141e7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0327 19:00:44.559355  568461 system_pods.go:89] "storage-provisioner" [940b5230-6bad-4bfd-9c0e-1284e9c59da3] Running
	I0327 19:00:44.559362  568461 system_pods.go:126] duration metric: took 10.8193ms to wait for k8s-apps to be running ...
	I0327 19:00:44.559378  568461 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 19:00:44.559437  568461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:00:44.571677  568461 system_svc.go:56] duration metric: took 12.289518ms WaitForService to wait for kubelet
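The kubelet check shells out to systemctl; with --quiet, is-active reports purely through its exit code, which Go's os/exec surfaces as a nil or non-nil error. A local equivalent of the remote check above:

```go
// Exit status 0 means the unit is active; anything else means not running.
cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
if err := cmd.Run(); err != nil {
	fmt.Println("kubelet is not active:", err)
} else {
	fmt.Println("kubelet is active")
}
```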
	I0327 19:00:44.571707  568461 kubeadm.go:576] duration metric: took 53.973347701s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 19:00:44.571739  568461 node_conditions.go:102] verifying NodePressure condition ...
	I0327 19:00:44.575319  568461 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 19:00:44.575362  568461 node_conditions.go:123] node cpu capacity is 2
	I0327 19:00:44.575377  568461 node_conditions.go:105] duration metric: took 3.632796ms to run NodePressure ...
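The NodePressure step reads the node's reported capacity (the ephemeral-storage and CPU figures above) straight from its status. A small snippet for the same lookup, reusing the earlier client; the ResourceList accessors shown are part of k8s.io/api/core/v1:

```go
node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-408183", metav1.GetOptions{})
if err != nil {
	panic(err)
}
fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String()) // 203034800Ki above
fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())                            // 2 above
```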
	I0327 19:00:44.575391  568461 start.go:240] waiting for startup goroutines ...
	I0327 19:00:44.686239  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:44.887031  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:44.958354  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:44.960145  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:45.203542  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:45.388013  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:45.457308  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:45.461315  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:45.686399  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:45.887189  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:45.956127  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:45.958647  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:46.187457  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:46.387146  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:46.471237  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:46.471482  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:46.687442  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:46.886881  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:46.963099  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:46.968746  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:47.186946  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:47.387910  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:47.476876  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:47.478157  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:47.695477  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:47.887193  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:47.959711  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:47.963868  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:48.186641  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:48.387635  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:48.456906  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:48.460528  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:48.687430  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:48.888264  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:48.984501  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:49.001337  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:49.194078  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:49.388405  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:49.461113  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:49.463877  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:49.688338  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:49.887318  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:49.966595  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:49.967921  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:50.190043  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:50.388158  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:50.457242  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:50.459132  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:50.687001  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:50.887886  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:50.956779  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:50.958849  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:51.187781  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:51.387809  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:51.456439  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:51.460294  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:51.687926  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:51.887926  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:51.972983  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:51.978443  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:52.187334  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:52.387792  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:52.456379  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:52.461789  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:52.688827  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:52.893604  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:52.964340  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:52.973707  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:53.187364  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:53.388492  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:53.459349  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:53.465325  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:53.687831  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:53.887740  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:53.958952  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:53.960990  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:54.187564  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:54.387412  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:54.456556  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:54.457569  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:54.687287  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:54.887663  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:54.960465  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:54.962028  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:55.187127  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:55.388253  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:55.457296  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:55.458211  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:55.686290  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:55.887719  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:55.957203  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:55.959272  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:56.187292  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:56.387994  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:56.456242  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:56.459509  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:56.689770  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:56.887526  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:56.966342  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:56.966488  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:57.188734  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:57.387528  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:57.456570  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:57.459889  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:57.687070  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:57.888483  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:57.960868  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:57.961761  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:58.186889  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:58.388227  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:58.462595  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:58.463606  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:58.691493  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:58.887683  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:58.967155  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:58.972381  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:59.187811  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:59.387352  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:59.457711  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:59.460264  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:00:59.687219  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:00:59.888927  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:00:59.974829  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:00:59.977142  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:00.246254  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:00.395825  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:00.467348  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:00.475368  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:00.688421  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:00.887564  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:00.956266  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:00.959496  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:01.189071  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:01.402223  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:01.459360  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:01.460424  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:01.693945  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:01.887728  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:01.960858  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:01.968861  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:02.187831  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:02.388650  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:02.458299  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:02.458708  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:02.692972  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:02.886749  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:02.961769  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:02.962064  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:03.186799  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:03.387737  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:03.455752  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:03.458460  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:03.695817  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:03.887971  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:03.960822  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:03.979461  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:04.187731  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:04.388332  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:04.459174  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:04.460651  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:04.690769  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:04.887693  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:04.959321  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:04.962517  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:05.190755  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:05.387503  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:05.456765  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:05.470761  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:05.686497  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:05.887867  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:05.957677  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:05.962449  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:06.190125  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:06.387854  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:06.467678  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:06.469730  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:06.686656  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:06.890968  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:06.979028  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:06.984232  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:07.187999  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:07.388327  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:07.457340  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0327 19:01:07.459943  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:07.686564  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:07.889807  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:07.958571  568461 kapi.go:107] duration metric: took 1m12.00795422s to wait for kubernetes.io/minikube-addons=registry ...
	I0327 19:01:07.960090  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:08.188192  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:08.386769  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:08.457246  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:08.687451  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:08.886373  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:09.012542  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:09.189333  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:09.386873  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:09.457251  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:09.688561  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:09.888156  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:09.980550  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:10.187716  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:10.389782  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:10.458206  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:10.688062  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:10.887631  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:10.973854  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:11.187023  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:11.389154  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:11.457730  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:11.687207  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:11.887347  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:11.957634  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:12.187833  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:12.388174  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:12.458003  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:12.687839  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:12.887456  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:12.958750  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:13.186832  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:13.389505  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:13.475055  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:13.690995  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:13.888377  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:13.958804  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:14.187689  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:14.387320  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:14.459785  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:14.687518  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:14.887102  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:14.959163  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:15.190011  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:15.387618  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:15.457123  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:15.687541  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:15.887535  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:15.958430  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:16.187883  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:16.388081  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:16.458538  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:16.687239  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:16.887377  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:16.959207  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:17.207975  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:17.387374  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:17.459407  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:17.688559  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:17.887094  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:17.959042  568461 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0327 19:01:18.187276  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:18.387709  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:18.457428  568461 kapi.go:107] duration metric: took 1m21.504745971s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0327 19:01:18.687989  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:18.891224  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:19.187220  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:19.387414  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:19.686176  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:19.887140  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:20.186625  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:20.387142  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:20.688390  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:20.887566  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:21.186382  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:21.387037  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:21.686849  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:21.887282  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:22.187295  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:22.388370  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:22.687124  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:22.887844  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:23.186880  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:23.387412  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:23.687046  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:23.888822  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:24.187244  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:24.387427  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:24.690429  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0327 19:01:24.889426  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:25.186563  568461 kapi.go:107] duration metric: took 1m28.005859371s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0327 19:01:25.387440  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:25.886860  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:26.387249  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:26.888332  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:27.387481  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:27.886796  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:28.388062  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:28.887960  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:29.386796  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:29.890089  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:30.387127  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:30.887049  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:31.387585  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:31.887686  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:32.386321  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:32.887382  568461 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0327 19:01:33.387629  568461 kapi.go:107] duration metric: took 1m31.504454255s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0327 19:01:33.389538  568461 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-408183 cluster.
	I0327 19:01:33.391744  568461 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0327 19:01:33.393645  568461 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0327 19:01:33.395501  568461 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, yakd, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0327 19:01:33.397854  568461 addons.go:505] duration metric: took 1m42.799273529s for enable addons: enabled=[cloud-spanner ingress-dns default-storageclass storage-provisioner nvidia-device-plugin storage-provisioner-rancher inspektor-gadget yakd metrics-server volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0327 19:01:33.397923  568461 start.go:245] waiting for cluster config update ...
	I0327 19:01:33.397949  568461 start.go:254] writing updated cluster config ...
	I0327 19:01:33.398246  568461 ssh_runner.go:195] Run: rm -f paused
	I0327 19:01:33.782229  568461 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 19:01:33.784758  568461 out.go:177] * Done! kubectl is now configured to use "addons-408183" cluster and "default" namespace by default
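	
	A note on the two gcp-auth hints above: credential injection happens at pod-creation time via the addon's admission webhook (the gcp-auth container from gcr.io/k8s-minikube/gcp-auth-webhook, visible in the container status section below), so opting a pod out means carrying the `gcp-auth-skip-secret` label key on the pod itself. A minimal sketch of such a manifest follows; the pod name, image, and label value are illustrative, and per the hint above only the label key matters:
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-demo          # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"    # the key is what the webhook checks; the value here is illustrative
	    spec:
	      containers:
	      - name: app
	        image: nginx                    # illustrative image; no GCP credentials get mounted into this pod
	
	Pods that already existed when the addon was enabled are not retroactively mutated, which is why the log above suggests recreating them or rerunning `minikube addons enable gcp-auth --refresh`.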
	
	
	==> CRI-O <==
	Mar 27 19:04:45 addons-408183 crio[890]: time="2024-03-27 19:04:45.953152087Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=678bcf52-1bf4-430f-8535-0e7868b53cf4 name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:04:45 addons-408183 crio[890]: time="2024-03-27 19:04:45.953364254Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=678bcf52-1bf4-430f-8535-0e7868b53cf4 name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:04:45 addons-408183 crio[890]: time="2024-03-27 19:04:45.954335866Z" level=info msg="Creating container: default/hello-world-app-5d77478584-tbdqv/hello-world-app" id=f9da2cf8-593a-4fa0-8ba1-1da7f54efd88 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 27 19:04:45 addons-408183 crio[890]: time="2024-03-27 19:04:45.954433523Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 27 19:04:46 addons-408183 crio[890]: time="2024-03-27 19:04:46.018015951Z" level=info msg="Created container f04f6789afe9be331be36c2d1c5a7334a030a231c0774da15961e9068eb08589: default/hello-world-app-5d77478584-tbdqv/hello-world-app" id=f9da2cf8-593a-4fa0-8ba1-1da7f54efd88 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 27 19:04:46 addons-408183 crio[890]: time="2024-03-27 19:04:46.019096444Z" level=info msg="Starting container: f04f6789afe9be331be36c2d1c5a7334a030a231c0774da15961e9068eb08589" id=93522a5c-3b19-4298-95b0-4d776be089a2 name=/runtime.v1.RuntimeService/StartContainer
	Mar 27 19:04:46 addons-408183 crio[890]: time="2024-03-27 19:04:46.028035093Z" level=info msg="Started container" PID=8087 containerID=f04f6789afe9be331be36c2d1c5a7334a030a231c0774da15961e9068eb08589 description=default/hello-world-app-5d77478584-tbdqv/hello-world-app id=93522a5c-3b19-4298-95b0-4d776be089a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=33f50d469f972cff8601ae6fc395d76910bcf84b1334d1c5566bf38dc0296aba
	Mar 27 19:04:46 addons-408183 conmon[8076]: conmon f04f6789afe9be331be3 <ninfo>: container 8087 exited with status 1
	Mar 27 19:04:46 addons-408183 crio[890]: time="2024-03-27 19:04:46.525724968Z" level=info msg="Stopping container: 655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7 (timeout: 2s)" id=42f85053-72a9-4cb2-b6fc-d297e28b0b7d name=/runtime.v1.RuntimeService/StopContainer
	Mar 27 19:04:46 addons-408183 crio[890]: time="2024-03-27 19:04:46.698032836Z" level=info msg="Removing container: e39a659f2114d714d909dd6c459fc7833076cbe1e748b6f3d3b2f99b2a0fef59" id=ebf9dc6f-2158-4f1f-9c39-3740a1097e15 name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 27 19:04:46 addons-408183 crio[890]: time="2024-03-27 19:04:46.718926325Z" level=info msg="Removed container e39a659f2114d714d909dd6c459fc7833076cbe1e748b6f3d3b2f99b2a0fef59: default/hello-world-app-5d77478584-tbdqv/hello-world-app" id=ebf9dc6f-2158-4f1f-9c39-3740a1097e15 name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.531852549Z" level=warning msg="Stopping container 655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=42f85053-72a9-4cb2-b6fc-d297e28b0b7d name=/runtime.v1.RuntimeService/StopContainer
	Mar 27 19:04:48 addons-408183 conmon[4828]: conmon 655b3f3bb8c31c486c3e <ninfo>: container 4839 exited with status 137
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.670549079Z" level=info msg="Stopped container 655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7: ingress-nginx/ingress-nginx-controller-65496f9567-czrvw/controller" id=42f85053-72a9-4cb2-b6fc-d297e28b0b7d name=/runtime.v1.RuntimeService/StopContainer
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.671205115Z" level=info msg="Stopping pod sandbox: 19c614f3034224b685dd0d3fa6741cdbaeb94c07d61839d3d732adeaa8236022" id=12c0d647-e4a5-46bf-9b1d-b5ca2b4a3a5d name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.674588863Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-BF7QNKHXZ4YBKND2 - [0:0]\n:KUBE-HP-JRNXRAI7VEXYOGPT - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-JRNXRAI7VEXYOGPT\n-X KUBE-HP-BF7QNKHXZ4YBKND2\nCOMMIT\n"
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.676035524Z" level=info msg="Closing host port tcp:80"
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.676083311Z" level=info msg="Closing host port tcp:443"
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.677551814Z" level=info msg="Host port tcp:80 does not have an open socket"
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.677589179Z" level=info msg="Host port tcp:443 does not have an open socket"
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.677758663Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-65496f9567-czrvw Namespace:ingress-nginx ID:19c614f3034224b685dd0d3fa6741cdbaeb94c07d61839d3d732adeaa8236022 UID:662ea9ea-ba90-40c0-8979-14aa182883ff NetNS:/var/run/netns/4db64ba6-9df2-49e4-9893-9853788bdfdd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.677899873Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-65496f9567-czrvw from CNI network \"kindnet\" (type=ptp)"
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.699648479Z" level=info msg="Stopped pod sandbox: 19c614f3034224b685dd0d3fa6741cdbaeb94c07d61839d3d732adeaa8236022" id=12c0d647-e4a5-46bf-9b1d-b5ca2b4a3a5d name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.706212177Z" level=info msg="Removing container: 655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7" id=9d0e04de-99f6-426d-bfd6-4a2620980c2f name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 27 19:04:48 addons-408183 crio[890]: time="2024-03-27 19:04:48.722939565Z" level=info msg="Removed container 655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7: ingress-nginx/ingress-nginx-controller-65496f9567-czrvw/controller" id=9d0e04de-99f6-426d-bfd6-4a2620980c2f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f04f6789afe9b       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago       Exited              hello-world-app           2                   33f50d469f972       hello-world-app-5d77478584-tbdqv
	7955e5cd75237       ghcr.io/headlamp-k8s/headlamp@sha256:1f277f42730106526a27560517a4c5f9253ccb2477be458986f44a791158a02c                        56 seconds ago      Running             headlamp                  0                   911ff2ad6138a       headlamp-5b77dbd7c4-tss7p
	0d5f11e0068da       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   b5fc103a8ef9b       nginx
	972873ec46e56       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 3 minutes ago       Running             gcp-auth                  0                   a57c459060a19       gcp-auth-7d69788767-rwbsx
	7aff2c7c07275       1a024e390dd050d584b5c93bb30810e8be713157ab713b0d77a7af14dfe88c1e                                                             3 minutes ago       Exited              patch                     3                   5f6b5b3c63a80       ingress-nginx-admission-patch-g98zk
	ca31cbad1b8fb       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   d3970b8b6552e       yakd-dashboard-9947fc6bf-4swzk
	37964f65d1de0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0b1098ef00acee905f9736f98dd151af0a38d0fef0ccf9fb5ad189b20933e5f8   4 minutes ago       Exited              create                    0                   17d0607a8de94       ingress-nginx-admission-create-mh6pz
	d954a86c4c307       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             4 minutes ago       Running             coredns                   0                   1675d5fb2bc21       coredns-76f75df574-bhhnt
	394bfe3d7c196       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   8b4a60ff12fb9       storage-provisioner
	9ba6235ecb64c       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                             5 minutes ago       Running             kindnet-cni               0                   75b364b06e5d1       kindnet-kt86z
	8802f6dc94234       0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775                                                             5 minutes ago       Running             kube-proxy                0                   97299da841712       kube-proxy-bfs7l
	2d17b7f223c11       121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195                                                             5 minutes ago       Running             kube-controller-manager   0                   9c05f1865e121       kube-controller-manager-addons-408183
	3286cce91da02       4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb                                                             5 minutes ago       Running             kube-scheduler            0                   7afe1f39244b5       kube-scheduler-addons-408183
	b67c50c57846a       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             5 minutes ago       Running             etcd                      0                   089e720476e62       etcd-addons-408183
	97bf31aa30cb8       2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794                                                             5 minutes ago       Running             kube-apiserver            0                   4f36e0a54f12d       kube-apiserver-addons-408183
	
	
	==> coredns [d954a86c4c3076d134a4e7327b950b7845ce1549966401a6db9aaca0e1869b4e] <==
	[INFO] 10.244.0.19:35016 - 21439 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063138s
	[INFO] 10.244.0.19:35016 - 29462 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060611s
	[INFO] 10.244.0.19:35016 - 10534 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061218s
	[INFO] 10.244.0.19:35016 - 64904 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058519s
	[INFO] 10.244.0.19:35016 - 58254 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001332988s
	[INFO] 10.244.0.19:35016 - 36860 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001746959s
	[INFO] 10.244.0.19:35016 - 32285 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057813s
	[INFO] 10.244.0.19:37267 - 6547 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000122296s
	[INFO] 10.244.0.19:37267 - 20334 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073838s
	[INFO] 10.244.0.19:40842 - 10838 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082346s
	[INFO] 10.244.0.19:37267 - 2919 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000106527s
	[INFO] 10.244.0.19:40842 - 2207 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065444s
	[INFO] 10.244.0.19:37267 - 20810 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097337s
	[INFO] 10.244.0.19:37267 - 60851 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075298s
	[INFO] 10.244.0.19:40842 - 50377 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077481s
	[INFO] 10.244.0.19:37267 - 26649 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006683s
	[INFO] 10.244.0.19:40842 - 50836 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074946s
	[INFO] 10.244.0.19:40842 - 31427 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062646s
	[INFO] 10.244.0.19:40842 - 20909 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004818s
	[INFO] 10.244.0.19:37267 - 8603 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001349521s
	[INFO] 10.244.0.19:40842 - 57762 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001552383s
	[INFO] 10.244.0.19:37267 - 30179 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001325981s
	[INFO] 10.244.0.19:37267 - 31540 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000101193s
	[INFO] 10.244.0.19:40842 - 23215 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002939476s
	[INFO] 10.244.0.19:40842 - 58128 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000127425s
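	
	The NXDOMAIN/NOERROR pattern above is ordinary resolv.conf search-path expansion: `hello-world-app.default.svc.cluster.local` contains only four dots, so with the kubelet default of `ndots:5` the resolver tries every search suffix first (each attempt producing a paired A/AAAA NXDOMAIN) before querying the name as-is, which answers NOERROR. The querying pod at 10.244.0.19 is evidently in the ingress-nginx namespace, given the `...ingress-nginx.svc.cluster.local` attempts. Its resolv.conf is not captured in this log, but from the suffixes seen in the queries it would look roughly like this sketch (the nameserver address is an assumption, the conventional kube-dns ClusterIP):
	
	    # Sketch of the querying pod's /etc/resolv.conf; not captured in this log.
	    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10
	    options ndots:5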
	
	
	==> describe nodes <==
	Name:               addons-408183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-408183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28
	                    minikube.k8s.io/name=addons-408183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T18_59_37_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-408183
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 18:59:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-408183
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 19:04:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 19:04:43 +0000   Wed, 27 Mar 2024 18:59:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 19:04:43 +0000   Wed, 27 Mar 2024 18:59:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 19:04:43 +0000   Wed, 27 Mar 2024 18:59:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 19:04:43 +0000   Wed, 27 Mar 2024 19:00:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-408183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ca58ea01d674e528007abd6e03a539b
	  System UUID:                beb0cb92-1755-42de-970b-4cde9b1240b3
	  Boot ID:                    561aadd0-a15d-4e78-9187-a38c38772b44
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace       Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------       ----                                   ------------  ----------  ---------------  -------------  ---
	  default         hello-world-app-5d77478584-tbdqv       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default         nginx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  gcp-auth        gcp-auth-7d69788767-rwbsx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  headlamp        headlamp-5b77dbd7c4-tss7p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system     coredns-76f75df574-bhhnt               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m3s
	  kube-system     etcd-addons-408183                     100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m16s
	  kube-system     kindnet-kt86z                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m3s
	  kube-system     kube-apiserver-addons-408183           250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system     kube-controller-manager-addons-408183  200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system     kube-proxy-bfs7l                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system     kube-scheduler-addons-408183           100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system     storage-provisioner                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  yakd-dashboard  yakd-dashboard-9947fc6bf-4swzk         0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m25s)  kubelet          Node addons-408183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m25s)  kubelet          Node addons-408183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x8 over 5m25s)  kubelet          Node addons-408183 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m16s                  kubelet          Node addons-408183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s                  kubelet          Node addons-408183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s                  kubelet          Node addons-408183 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m3s                   node-controller  Node addons-408183 event: Registered Node addons-408183 in Controller
	  Normal  NodeReady                4m30s                  kubelet          Node addons-408183 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001239] FS-Cache: O-key=[8] 'e93a5c0100000000'
	[  +0.000726] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000a9e34122
	[  +0.001078] FS-Cache: N-key=[8] 'e93a5c0100000000'
	[  +0.002912] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001018] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=00000000748a5021
	[  +0.001141] FS-Cache: O-key=[8] 'e93a5c0100000000'
	[  +0.000736] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=000000006a51abc6
	[  +0.001067] FS-Cache: N-key=[8] 'e93a5c0100000000'
	[  +1.813216] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=0000000010f04798
	[  +0.001025] FS-Cache: O-key=[8] 'e83a5c0100000000'
	[  +0.000768] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=000000009447d7be
	[  +0.001041] FS-Cache: N-key=[8] 'e83a5c0100000000'
	[  +0.266664] FS-Cache: Duplicate cookie detected
	[  +0.000736] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000963] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=0000000088c6ae44
	[  +0.001071] FS-Cache: O-key=[8] 'ee3a5c0100000000'
	[  +0.000829] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000a9e34122
	[  +0.001118] FS-Cache: N-key=[8] 'ee3a5c0100000000'
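
The FS-Cache "Duplicate cookie detected" entries come from the 9p mount used by the Docker driver (note the {9p.inode} cookies); they are kernel cache-layer warnings and unrelated to either failure in this report. When scanning dmesg for anything relevant, filtering them out is enough:

	dmesg --level=err,warn | grep -v FS-Cache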
	
	
	==> etcd [b67c50c57846a10d3c1bae42aa28f177cac6e1baa5ee77f6f21b7882dbb1f65f] <==
	{"level":"info","ts":"2024-03-27T18:59:30.710577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-27T18:59:30.710628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-27T18:59:30.710681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-27T18:59:30.710713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-27T18:59:30.710748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-27T18:59:30.710784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-27T18:59:30.712982Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-408183 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T18:59:30.71319Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T18:59:30.713933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T18:59:30.714307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T18:59:30.715934Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-27T18:59:30.717998Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T18:59:30.718085Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T18:59:30.718113Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T18:59:30.718133Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T18:59:30.718142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T18:59:30.71963Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-27T18:59:51.37044Z","caller":"traceutil/trace.go:171","msg":"trace[381739942] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"170.456819ms","start":"2024-03-27T18:59:51.199965Z","end":"2024-03-27T18:59:51.370422Z","steps":["trace[381739942] 'process raft request'  (duration: 82.350316ms)","trace[381739942] 'compare'  (duration: 88.018955ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-27T18:59:51.765186Z","caller":"traceutil/trace.go:171","msg":"trace[1576096640] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"107.987338ms","start":"2024-03-27T18:59:51.657182Z","end":"2024-03-27T18:59:51.765169Z","steps":["trace[1576096640] 'process raft request'  (duration: 104.815232ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T18:59:54.026281Z","caller":"traceutil/trace.go:171","msg":"trace[216165712] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"105.157769ms","start":"2024-03-27T18:59:53.921091Z","end":"2024-03-27T18:59:54.026249Z","steps":["trace[216165712] 'process raft request'  (duration: 78.368804ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T18:59:54.026623Z","caller":"traceutil/trace.go:171","msg":"trace[1536457198] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"105.379421ms","start":"2024-03-27T18:59:53.921231Z","end":"2024-03-27T18:59:54.02661Z","steps":["trace[1536457198] 'process raft request'  (duration: 89.059727ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T18:59:54.026835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.456195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-27T18:59:54.026972Z","caller":"traceutil/trace.go:171","msg":"trace[1257746177] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:0; response_revision:428; }","duration":"105.618172ms","start":"2024-03-27T18:59:53.921344Z","end":"2024-03-27T18:59:54.026962Z","steps":["trace[1257746177] 'agreement among raft nodes before linearized reading'  (duration: 105.434788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T18:59:54.053145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.580919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-03-27T18:59:54.053208Z","caller":"traceutil/trace.go:171","msg":"trace[1452307210] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:432; }","duration":"131.653517ms","start":"2024-03-27T18:59:53.921542Z","end":"2024-03-27T18:59:54.053196Z","steps":["trace[1452307210] 'agreement among raft nodes before linearized reading'  (duration: 131.501518ms)"],"step_count":1}
	
	
	==> gcp-auth [972873ec46e56b87f8fb9f20514b28f22856f71298017731d3651285254365a5] <==
	2024/03/27 19:01:32 GCP Auth Webhook started!
	2024/03/27 19:01:45 Ready to marshal response ...
	2024/03/27 19:01:45 Ready to write response ...
	2024/03/27 19:02:03 Ready to marshal response ...
	2024/03/27 19:02:03 Ready to write response ...
	2024/03/27 19:02:09 Ready to marshal response ...
	2024/03/27 19:02:09 Ready to write response ...
	2024/03/27 19:02:25 Ready to marshal response ...
	2024/03/27 19:02:25 Ready to write response ...
	2024/03/27 19:02:54 Ready to marshal response ...
	2024/03/27 19:02:54 Ready to write response ...
	2024/03/27 19:02:54 Ready to marshal response ...
	2024/03/27 19:02:54 Ready to write response ...
	2024/03/27 19:03:03 Ready to marshal response ...
	2024/03/27 19:03:03 Ready to write response ...
	2024/03/27 19:03:53 Ready to marshal response ...
	2024/03/27 19:03:53 Ready to write response ...
	2024/03/27 19:03:53 Ready to marshal response ...
	2024/03/27 19:03:53 Ready to write response ...
	2024/03/27 19:03:53 Ready to marshal response ...
	2024/03/27 19:03:53 Ready to write response ...
	2024/03/27 19:04:28 Ready to marshal response ...
	2024/03/27 19:04:28 Ready to write response ...
	
	
	==> kernel <==
	 19:04:54 up  2:47,  0 users,  load average: 0.65, 1.67, 2.26
	Linux addons-408183 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [9ba6235ecb64cb6a3b3f7e47b4a6ddeaf7d87db2f697f42a6a526d25943e75b4] <==
	I0327 19:02:53.649781       1 main.go:227] handling current node
	I0327 19:03:03.658783       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:03:03.658967       1 main.go:227] handling current node
	I0327 19:03:13.669621       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:03:13.669650       1 main.go:227] handling current node
	I0327 19:03:23.682067       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:03:23.682095       1 main.go:227] handling current node
	I0327 19:03:33.686074       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:03:33.686103       1 main.go:227] handling current node
	I0327 19:03:43.698572       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:03:43.698599       1 main.go:227] handling current node
	I0327 19:03:53.707296       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:03:53.707322       1 main.go:227] handling current node
	I0327 19:04:03.721075       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:04:03.721110       1 main.go:227] handling current node
	I0327 19:04:13.725678       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:04:13.725710       1 main.go:227] handling current node
	I0327 19:04:23.738380       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:04:23.738409       1 main.go:227] handling current node
	I0327 19:04:33.751117       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:04:33.751147       1 main.go:227] handling current node
	I0327 19:04:43.763369       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:04:43.763396       1 main.go:227] handling current node
	I0327 19:04:53.776193       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:04:53.776225       1 main.go:227] handling current node
	
	
	==> kube-apiserver [97bf31aa30cb89ab69bbe0659c4ed53ee511dd8e8ffa43bca4c0472967024538] <==
	E0327 19:00:44.431841       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0327 19:02:03.513876       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0327 19:02:04.549787       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0327 19:02:09.118972       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0327 19:02:09.439372       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.110.251"}
	I0327 19:02:11.515997       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0327 19:02:42.034139       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 19:02:42.034202       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 19:02:42.056357       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 19:02:42.056424       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 19:02:42.121575       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 19:02:42.121633       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0327 19:02:42.134652       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0327 19:02:42.134797       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0327 19:02:43.122087       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0327 19:02:43.135793       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0327 19:02:43.161749       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0327 19:02:45.381630       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0327 19:03:04.870468       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0327 19:03:04.880832       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0327 19:03:04.891545       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0327 19:03:19.892854       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0327 19:03:53.499164       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.155.203"}
	I0327 19:04:28.560355       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.62.218"}
	E0327 19:04:45.578436       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
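
The final authentication error at 19:04:45 ("serviceaccounts \"ingress-nginx\" not found") lines up with the controller-manager tearing down the ingress-nginx workloads at the same second (next section): a pod presenting a token for a just-deleted service account during addon cleanup. That is expected during teardown, not a cause. A timeline cross-check:

	kubectl --context addons-408183 get events -A --sort-by=.lastTimestamp | tail -n 20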
	
	
	==> kube-controller-manager [2d17b7f223c118e00912dcd24030b06728477e98afda49fdc5d2ef1ceed1b6bd] <==
	W0327 19:04:01.193993       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 19:04:01.194035       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 19:04:05.226805       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 19:04:05.226842       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 19:04:06.642612       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 19:04:06.642649       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 19:04:28.282172       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0327 19:04:28.299958       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-tbdqv"
	I0327 19:04:28.312592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="30.25977ms"
	I0327 19:04:28.332314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.583591ms"
	I0327 19:04:28.332606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.72µs"
	I0327 19:04:28.337249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="36.759µs"
	I0327 19:04:31.681587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.917µs"
	I0327 19:04:32.678425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.967µs"
	I0327 19:04:33.681972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.272µs"
	W0327 19:04:35.175556       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 19:04:35.175596       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0327 19:04:45.482513       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0327 19:04:45.488286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="6.875µs"
	I0327 19:04:45.492036       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0327 19:04:46.717443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.092µs"
	W0327 19:04:48.280396       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 19:04:48.280522       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0327 19:04:53.282273       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0327 19:04:53.282306       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [8802f6dc9423427a1a7a7ad3e18395024ad3b47f020c2fa56887fef7c5869b7c] <==
	I0327 18:59:56.146153       1 server_others.go:72] "Using iptables proxy"
	I0327 18:59:56.313405       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0327 18:59:56.472405       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0327 18:59:56.472503       1 server_others.go:168] "Using iptables Proxier"
	I0327 18:59:56.474282       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0327 18:59:56.474342       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0327 18:59:56.474374       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 18:59:56.474569       1 server.go:865] "Version info" version="v1.29.3"
	I0327 18:59:56.474580       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 18:59:56.476201       1 config.go:188] "Starting service config controller"
	I0327 18:59:56.476265       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 18:59:56.476311       1 config.go:97] "Starting endpoint slice config controller"
	I0327 18:59:56.476339       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 18:59:56.476828       1 config.go:315] "Starting node config controller"
	I0327 18:59:56.476886       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 18:59:56.576423       1 shared_informer.go:318] Caches are synced for service config
	I0327 18:59:56.576895       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 18:59:56.576951       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3286cce91da02054ac8d5e8f19e6d24f3100e9f1a92af86144416e06af896838] <==
	E0327 18:59:34.048974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 18:59:34.048956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 18:59:34.049064       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 18:59:34.049081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 18:59:34.049126       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 18:59:34.049142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 18:59:34.049190       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 18:59:34.049232       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 18:59:34.049248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 18:59:34.049273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 18:59:34.049294       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 18:59:34.049365       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 18:59:34.049381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 18:59:34.049372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 18:59:34.049328       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 18:59:34.049471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0327 18:59:34.986177       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 18:59:34.986327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 18:59:35.052446       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 18:59:35.053101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 18:59:35.090988       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 18:59:35.091135       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 18:59:35.205773       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 18:59:35.205812       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0327 18:59:37.729671       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
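
The forbidden list/watch errors at 18:59:34-35 are the usual control-plane bootstrap race: the scheduler starts before its RBAC bindings are reconciled, and the "Caches are synced" line at 18:59:37 shows it recovered within seconds. If in doubt, the bindings can be verified after startup (impersonation requires admin credentials, which the minikube kubeconfig has):

	kubectl --context addons-408183 auth can-i list nodes --as=system:kube-scheduler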
	
	
	==> kubelet <==
	Mar 27 19:04:42 addons-408183 kubelet[1465]: I0327 19:04:42.951421    1465 scope.go:117] "RemoveContainer" containerID="e4d675d4eff1f5f880e6ca49e1ef935f79f74121bd10455e31512e46667c62f7"
	Mar 27 19:04:42 addons-408183 kubelet[1465]: E0327 19:04:42.951680    1465 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(a2a7db0c-b3a4-491d-bef5-73febdd9a49a)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="a2a7db0c-b3a4-491d-bef5-73febdd9a49a"
	Mar 27 19:04:44 addons-408183 kubelet[1465]: I0327 19:04:44.526676    1465 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pwjb\" (UniqueName: \"kubernetes.io/projected/a2a7db0c-b3a4-491d-bef5-73febdd9a49a-kube-api-access-5pwjb\") pod \"a2a7db0c-b3a4-491d-bef5-73febdd9a49a\" (UID: \"a2a7db0c-b3a4-491d-bef5-73febdd9a49a\") "
	Mar 27 19:04:44 addons-408183 kubelet[1465]: I0327 19:04:44.531843    1465 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a7db0c-b3a4-491d-bef5-73febdd9a49a-kube-api-access-5pwjb" (OuterVolumeSpecName: "kube-api-access-5pwjb") pod "a2a7db0c-b3a4-491d-bef5-73febdd9a49a" (UID: "a2a7db0c-b3a4-491d-bef5-73febdd9a49a"). InnerVolumeSpecName "kube-api-access-5pwjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 19:04:44 addons-408183 kubelet[1465]: I0327 19:04:44.627818    1465 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5pwjb\" (UniqueName: \"kubernetes.io/projected/a2a7db0c-b3a4-491d-bef5-73febdd9a49a-kube-api-access-5pwjb\") on node \"addons-408183\" DevicePath \"\""
	Mar 27 19:04:44 addons-408183 kubelet[1465]: I0327 19:04:44.688930    1465 scope.go:117] "RemoveContainer" containerID="e4d675d4eff1f5f880e6ca49e1ef935f79f74121bd10455e31512e46667c62f7"
	Mar 27 19:04:44 addons-408183 kubelet[1465]: I0327 19:04:44.952533    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a7db0c-b3a4-491d-bef5-73febdd9a49a" path="/var/lib/kubelet/pods/a2a7db0c-b3a4-491d-bef5-73febdd9a49a/volumes"
	Mar 27 19:04:45 addons-408183 kubelet[1465]: E0327 19:04:45.540655    1465 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8c68a310e1f020f0c2fffb2b124d3fd9724a0bf0dda168b92b2ef6a1928af7dd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8c68a310e1f020f0c2fffb2b124d3fd9724a0bf0dda168b92b2ef6a1928af7dd/diff: no such file or directory, extraDiskErr: <nil>
	Mar 27 19:04:45 addons-408183 kubelet[1465]: I0327 19:04:45.951304    1465 scope.go:117] "RemoveContainer" containerID="e39a659f2114d714d909dd6c459fc7833076cbe1e748b6f3d3b2f99b2a0fef59"
	Mar 27 19:04:46 addons-408183 kubelet[1465]: I0327 19:04:46.696352    1465 scope.go:117] "RemoveContainer" containerID="e39a659f2114d714d909dd6c459fc7833076cbe1e748b6f3d3b2f99b2a0fef59"
	Mar 27 19:04:46 addons-408183 kubelet[1465]: I0327 19:04:46.696605    1465 scope.go:117] "RemoveContainer" containerID="f04f6789afe9be331be36c2d1c5a7334a030a231c0774da15961e9068eb08589"
	Mar 27 19:04:46 addons-408183 kubelet[1465]: E0327 19:04:46.696863    1465 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-tbdqv_default(611bf69c-57bd-4c02-b83a-2b471467b520)\"" pod="default/hello-world-app-5d77478584-tbdqv" podUID="611bf69c-57bd-4c02-b83a-2b471467b520"
	Mar 27 19:04:46 addons-408183 kubelet[1465]: I0327 19:04:46.952375    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8228583c-a1ca-4bc7-98cd-a119c5280faa" path="/var/lib/kubelet/pods/8228583c-a1ca-4bc7-98cd-a119c5280faa/volumes"
	Mar 27 19:04:46 addons-408183 kubelet[1465]: I0327 19:04:46.952819    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83794d7d-39f0-4876-875e-4585aaa61dff" path="/var/lib/kubelet/pods/83794d7d-39f0-4876-875e-4585aaa61dff/volumes"
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.705111    1465 scope.go:117] "RemoveContainer" containerID="655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7"
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.723172    1465 scope.go:117] "RemoveContainer" containerID="655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7"
	Mar 27 19:04:48 addons-408183 kubelet[1465]: E0327 19:04:48.723660    1465 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7\": container with ID starting with 655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7 not found: ID does not exist" containerID="655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7"
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.723711    1465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7"} err="failed to get container status \"655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7\": rpc error: code = NotFound desc = could not find container \"655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7\": container with ID starting with 655b3f3bb8c31c486c3e40b35fbcd9df65ceb9438a649d63c4a6389af59673c7 not found: ID does not exist"
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.866640    1465 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/662ea9ea-ba90-40c0-8979-14aa182883ff-webhook-cert\") pod \"662ea9ea-ba90-40c0-8979-14aa182883ff\" (UID: \"662ea9ea-ba90-40c0-8979-14aa182883ff\") "
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.866713    1465 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwvk\" (UniqueName: \"kubernetes.io/projected/662ea9ea-ba90-40c0-8979-14aa182883ff-kube-api-access-djwvk\") pod \"662ea9ea-ba90-40c0-8979-14aa182883ff\" (UID: \"662ea9ea-ba90-40c0-8979-14aa182883ff\") "
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.868651    1465 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662ea9ea-ba90-40c0-8979-14aa182883ff-kube-api-access-djwvk" (OuterVolumeSpecName: "kube-api-access-djwvk") pod "662ea9ea-ba90-40c0-8979-14aa182883ff" (UID: "662ea9ea-ba90-40c0-8979-14aa182883ff"). InnerVolumeSpecName "kube-api-access-djwvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.869297    1465 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662ea9ea-ba90-40c0-8979-14aa182883ff-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "662ea9ea-ba90-40c0-8979-14aa182883ff" (UID: "662ea9ea-ba90-40c0-8979-14aa182883ff"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.952124    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662ea9ea-ba90-40c0-8979-14aa182883ff" path="/var/lib/kubelet/pods/662ea9ea-ba90-40c0-8979-14aa182883ff/volumes"
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.967892    1465 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-djwvk\" (UniqueName: \"kubernetes.io/projected/662ea9ea-ba90-40c0-8979-14aa182883ff-kube-api-access-djwvk\") on node \"addons-408183\" DevicePath \"\""
	Mar 27 19:04:48 addons-408183 kubelet[1465]: I0327 19:04:48.967944    1465 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/662ea9ea-ba90-40c0-8979-14aa182883ff-webhook-cert\") on node \"addons-408183\" DevicePath \"\""
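
This kubelet window carries the substance of the failure: kube-ingress-dns-minikube is stuck in CrashLoopBackOff (back-off already at 2m40s), consistent with DNS queries against the node address timing out, and hello-world-app is itself in a restart back-off at 19:04:46. The crash reason would be in the previous container's log, if still retrievable before the pod's volumes were cleaned up:

	kubectl --context addons-408183 -n kube-system logs kube-ingress-dns-minikube --previous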
	
	
	==> storage-provisioner [394bfe3d7c1968e1e00234466b638de50e01956f3445e542f6a55ff561f79f33] <==
	I0327 19:00:24.688334       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 19:00:24.766498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 19:00:24.766721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 19:00:24.841523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 19:00:24.841808       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-408183_5fec65f0-0b19-4b48-8b03-f22128c30947!
	I0327 19:00:24.842292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06f8d3bb-7086-4326-a3a2-e4fa805ab71e", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-408183_5fec65f0-0b19-4b48-8b03-f22128c30947 became leader
	I0327 19:00:24.942560       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-408183_5fec65f0-0b19-4b48-8b03-f22128c30947!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-408183 -n addons-408183
helpers_test.go:261: (dbg) Run:  kubectl --context addons-408183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (166.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (124.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-738145 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0327 19:19:17.458279  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 19:19:45.143299  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-738145 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m59.533481684s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-738145       NotReady   control-plane   9m58s   v1.29.3
	ha-738145-m02   Ready      control-plane   9m22s   v1.29.3
	ha-738145-m04   Ready      <none>          7m23s   v1.29.3

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
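
The template output maps one-to-one onto the node table above: the Unknown Ready status belongs to ha-738145, meaning the node controller stopped receiving kubelet heartbeats from that control-plane node after the restart, rather than the node reporting itself unhealthy. Two first checks, sketched under the assumption that the kubeconfig context matches the profile name:

	kubectl --context ha-738145 describe node ha-738145
	minikube -p ha-738145 ssh "sudo systemctl status kubelet"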
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-738145
helpers_test.go:235: (dbg) docker inspect ha-738145:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cac7717827e81ebff1eb8d7b8f9bcd5bba52cdc28d6814b6d143c2b945b6588f",
	        "Created": "2024-03-27T19:10:28.679731312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 624380,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-27T19:18:49.833128642Z",
	            "FinishedAt": "2024-03-27T19:18:48.976805028Z"
	        },
	        "Image": "sha256:f9b5358e8c18dbe49e632154cad75e0968b2e103f621caff2c3ed996f4155861",
	        "ResolvConfPath": "/var/lib/docker/containers/cac7717827e81ebff1eb8d7b8f9bcd5bba52cdc28d6814b6d143c2b945b6588f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cac7717827e81ebff1eb8d7b8f9bcd5bba52cdc28d6814b6d143c2b945b6588f/hostname",
	        "HostsPath": "/var/lib/docker/containers/cac7717827e81ebff1eb8d7b8f9bcd5bba52cdc28d6814b6d143c2b945b6588f/hosts",
	        "LogPath": "/var/lib/docker/containers/cac7717827e81ebff1eb8d7b8f9bcd5bba52cdc28d6814b6d143c2b945b6588f/cac7717827e81ebff1eb8d7b8f9bcd5bba52cdc28d6814b6d143c2b945b6588f-json.log",
	        "Name": "/ha-738145",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-738145:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-738145",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f1c5a2b6f1996c7625156f67e24a176e0d4a044328ff8c305a11ee7bfcd43fb8-init/diff:/var/lib/docker/overlay2/035f6eff93a34b4eb6fc7c3d7c8227de09cbceaeca4dc81b78c663243a30a00f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1c5a2b6f1996c7625156f67e24a176e0d4a044328ff8c305a11ee7bfcd43fb8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1c5a2b6f1996c7625156f67e24a176e0d4a044328ff8c305a11ee7bfcd43fb8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1c5a2b6f1996c7625156f67e24a176e0d4a044328ff8c305a11ee7bfcd43fb8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-738145",
	                "Source": "/var/lib/docker/volumes/ha-738145/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-738145",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-738145",
	                "name.minikube.sigs.k8s.io": "ha-738145",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73257251849c4315ed9dd5ab09a4d0e775259a9e3a7288586426248b24fb6bec",
	            "SandboxKey": "/var/run/docker/netns/73257251849c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33578"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33577"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33574"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33576"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33575"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-738145": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "4d2482224fd212a97a120e7f55b0b3536338359ebac2fad552df895fc8294778",
	                    "EndpointID": "76a8717525bdbb74f627a55de488ae103d0692669dd5f9ad50d6c9c6c2e969bf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-738145",
	                        "cac7717827e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
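For reference, the port map in the inspect output above (container 22/tcp published on 127.0.0.1:33578) is what the restart log below relies on to reach the node over SSH. A minimal sketch of the same Go-template query minikube itself runs during provisioning, assuming the ha-738145 container still exists on this host:

	# Read back the host port mapped to the container's SSH port (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-738145
	# Expected output for this run: 33578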
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-738145 -n ha-738145
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-738145 logs -n 25: (1.89196059s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-738145 cp ha-738145-m03:/home/docker/cp-test.txt                              | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m04:/home/docker/cp-test_ha-738145-m03_ha-738145-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n                                                                 | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n ha-738145-m04 sudo cat                                          | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | /home/docker/cp-test_ha-738145-m03_ha-738145-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-738145 cp testdata/cp-test.txt                                                | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n                                                                 | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-738145 cp ha-738145-m04:/home/docker/cp-test.txt                              | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3502835903/001/cp-test_ha-738145-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n                                                                 | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-738145 cp ha-738145-m04:/home/docker/cp-test.txt                              | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145:/home/docker/cp-test_ha-738145-m04_ha-738145.txt                       |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n                                                                 | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n ha-738145 sudo cat                                              | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | /home/docker/cp-test_ha-738145-m04_ha-738145.txt                                 |           |         |                |                     |                     |
	| cp      | ha-738145 cp ha-738145-m04:/home/docker/cp-test.txt                              | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m02:/home/docker/cp-test_ha-738145-m04_ha-738145-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n                                                                 | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n ha-738145-m02 sudo cat                                          | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | /home/docker/cp-test_ha-738145-m04_ha-738145-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-738145 cp ha-738145-m04:/home/docker/cp-test.txt                              | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m03:/home/docker/cp-test_ha-738145-m04_ha-738145-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n                                                                 | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | ha-738145-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-738145 ssh -n ha-738145-m03 sudo cat                                          | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | /home/docker/cp-test_ha-738145-m04_ha-738145-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-738145 node stop m02 -v=7                                                     | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:14 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-738145 node start m02 -v=7                                                    | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:14 UTC | 27 Mar 24 19:15 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-738145 -v=7                                                           | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-738145 -v=7                                                                | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:15 UTC | 27 Mar 24 19:15 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-738145 --wait=true -v=7                                                    | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:15 UTC | 27 Mar 24 19:17 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-738145                                                                | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:18 UTC |                     |
	| node    | ha-738145 node delete m03 -v=7                                                   | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:18 UTC | 27 Mar 24 19:18 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-738145 stop -v=7                                                              | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:18 UTC | 27 Mar 24 19:18 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-738145 --wait=true                                                         | ha-738145 | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:18 UTC | 27 Mar 24 19:20 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |                |                     |                     |
	|         | --driver=docker                                                                  |           |         |                |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 19:18:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 19:18:49.381665  624194 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:18:49.381888  624194 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:18:49.381929  624194 out.go:304] Setting ErrFile to fd 2...
	I0327 19:18:49.381942  624194 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:18:49.382217  624194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 19:18:49.382628  624194 out.go:298] Setting JSON to false
	I0327 19:18:49.383539  624194 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10867,"bootTime":1711556262,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 19:18:49.383661  624194 start.go:139] virtualization:  
	I0327 19:18:49.386632  624194 out.go:177] * [ha-738145] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 19:18:49.389213  624194 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 19:18:49.391138  624194 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:18:49.389344  624194 notify.go:220] Checking for updates...
	I0327 19:18:49.394901  624194 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:18:49.396889  624194 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	I0327 19:18:49.398966  624194 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 19:18:49.400950  624194 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 19:18:49.403391  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:18:49.403906  624194 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:18:49.423951  624194 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 19:18:49.424071  624194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 19:18:49.490700  624194 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2024-03-27 19:18:49.481437987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 19:18:49.490819  624194 docker.go:295] overlay module found
	I0327 19:18:49.493166  624194 out.go:177] * Using the docker driver based on existing profile
	I0327 19:18:49.495088  624194 start.go:297] selected driver: docker
	I0327 19:18:49.495106  624194 start.go:901] validating driver "docker" against &{Name:ha-738145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-738145 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:18:49.495281  624194 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 19:18:49.495387  624194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 19:18:49.552828  624194 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2024-03-27 19:18:49.540211587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 19:18:49.553262  624194 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 19:18:49.553293  624194 cni.go:84] Creating CNI manager for ""
	I0327 19:18:49.553301  624194 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0327 19:18:49.553349  624194 start.go:340] cluster config:
	{Name:ha-738145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-738145 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:18:49.555806  624194 out.go:177] * Starting "ha-738145" primary control-plane node in "ha-738145" cluster
	I0327 19:18:49.557472  624194 cache.go:121] Beginning downloading kic base image for docker with crio
	I0327 19:18:49.559240  624194 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 19:18:49.561334  624194 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 19:18:49.561394  624194 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4
	I0327 19:18:49.561403  624194 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 19:18:49.561408  624194 cache.go:56] Caching tarball of preloaded images
	I0327 19:18:49.561578  624194 preload.go:173] Found /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0327 19:18:49.561586  624194 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 19:18:49.561730  624194 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/config.json ...
	I0327 19:18:49.575108  624194 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon, skipping pull
	I0327 19:18:49.575138  624194 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in daemon, skipping load
	I0327 19:18:49.575161  624194 cache.go:194] Successfully downloaded all kic artifacts
	I0327 19:18:49.575189  624194 start.go:360] acquireMachinesLock for ha-738145: {Name:mkc6b60bc4de2c929039606d08a51e5c7c488d00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:18:49.575258  624194 start.go:364] duration metric: took 41.961µs to acquireMachinesLock for "ha-738145"
	I0327 19:18:49.575283  624194 start.go:96] Skipping create...Using existing machine configuration
	I0327 19:18:49.575293  624194 fix.go:54] fixHost starting: 
	I0327 19:18:49.575572  624194 cli_runner.go:164] Run: docker container inspect ha-738145 --format={{.State.Status}}
	I0327 19:18:49.590109  624194 fix.go:112] recreateIfNeeded on ha-738145: state=Stopped err=<nil>
	W0327 19:18:49.590138  624194 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 19:18:49.592566  624194 out.go:177] * Restarting existing docker container for "ha-738145" ...
	I0327 19:18:49.594834  624194 cli_runner.go:164] Run: docker start ha-738145
	I0327 19:18:49.840552  624194 cli_runner.go:164] Run: docker container inspect ha-738145 --format={{.State.Status}}
	I0327 19:18:49.861970  624194 kic.go:430] container "ha-738145" state is running.
	I0327 19:18:49.862805  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145
	I0327 19:18:49.883602  624194 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/config.json ...
	I0327 19:18:49.883850  624194 machine.go:94] provisionDockerMachine start ...
	I0327 19:18:49.883923  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:49.902837  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:18:49.903252  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0327 19:18:49.903266  624194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 19:18:49.903964  624194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0327 19:18:53.025315  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-738145
	
	I0327 19:18:53.025340  624194 ubuntu.go:169] provisioning hostname "ha-738145"
	I0327 19:18:53.025427  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:53.042979  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:18:53.043238  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0327 19:18:53.043255  624194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-738145 && echo "ha-738145" | sudo tee /etc/hostname
	I0327 19:18:53.177455  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-738145
	
	I0327 19:18:53.177555  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:53.193173  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:18:53.193420  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0327 19:18:53.193441  624194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-738145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-738145/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-738145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 19:18:53.313813  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 19:18:53.313840  624194 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18517-562206/.minikube CaCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18517-562206/.minikube}
	I0327 19:18:53.313869  624194 ubuntu.go:177] setting up certificates
	I0327 19:18:53.313882  624194 provision.go:84] configureAuth start
	I0327 19:18:53.313961  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145
	I0327 19:18:53.329735  624194 provision.go:143] copyHostCerts
	I0327 19:18:53.329780  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem
	I0327 19:18:53.329814  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem, removing ...
	I0327 19:18:53.329824  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem
	I0327 19:18:53.329918  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem (1082 bytes)
	I0327 19:18:53.330012  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem
	I0327 19:18:53.330037  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem, removing ...
	I0327 19:18:53.330047  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem
	I0327 19:18:53.330078  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem (1123 bytes)
	I0327 19:18:53.330129  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem
	I0327 19:18:53.330151  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem, removing ...
	I0327 19:18:53.330158  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem
	I0327 19:18:53.330185  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem (1679 bytes)
	I0327 19:18:53.330237  624194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem org=jenkins.ha-738145 san=[127.0.0.1 192.168.49.2 ha-738145 localhost minikube]
	I0327 19:18:53.690951  624194 provision.go:177] copyRemoteCerts
	I0327 19:18:53.691026  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 19:18:53.691069  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:53.706064  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145/id_rsa Username:docker}
	I0327 19:18:53.794517  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 19:18:53.794575  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0327 19:18:53.817886  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 19:18:53.818016  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 19:18:53.841691  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 19:18:53.841756  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 19:18:53.866020  624194 provision.go:87] duration metric: took 552.120436ms to configureAuth
	I0327 19:18:53.866063  624194 ubuntu.go:193] setting minikube options for container-runtime
	I0327 19:18:53.866296  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:18:53.866423  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:53.880255  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:18:53.880501  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33578 <nil> <nil>}
	I0327 19:18:53.880523  624194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 19:18:54.260976  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 19:18:54.260998  624194 machine.go:97] duration metric: took 4.377137246s to provisionDockerMachine
	I0327 19:18:54.261009  624194 start.go:293] postStartSetup for "ha-738145" (driver="docker")
	I0327 19:18:54.261020  624194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 19:18:54.261088  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 19:18:54.261158  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:54.280618  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145/id_rsa Username:docker}
	I0327 19:18:54.370750  624194 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 19:18:54.373944  624194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 19:18:54.373983  624194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 19:18:54.373994  624194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 19:18:54.374001  624194 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0327 19:18:54.374012  624194 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-562206/.minikube/addons for local assets ...
	I0327 19:18:54.374073  624194 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-562206/.minikube/files for local assets ...
	I0327 19:18:54.374159  624194 filesync.go:149] local asset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> 5676232.pem in /etc/ssl/certs
	I0327 19:18:54.374173  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> /etc/ssl/certs/5676232.pem
	I0327 19:18:54.374276  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 19:18:54.382791  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem --> /etc/ssl/certs/5676232.pem (1708 bytes)
	I0327 19:18:54.409396  624194 start.go:296] duration metric: took 148.371693ms for postStartSetup
	I0327 19:18:54.409502  624194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 19:18:54.409559  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:54.425604  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145/id_rsa Username:docker}
	I0327 19:18:54.514737  624194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0327 19:18:54.519111  624194 fix.go:56] duration metric: took 4.94381005s for fixHost
	I0327 19:18:54.519139  624194 start.go:83] releasing machines lock for "ha-738145", held for 4.943867059s
	I0327 19:18:54.519213  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145
	I0327 19:18:54.533753  624194 ssh_runner.go:195] Run: cat /version.json
	I0327 19:18:54.533813  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:54.534132  624194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 19:18:54.534174  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:18:54.552882  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145/id_rsa Username:docker}
	I0327 19:18:54.554005  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145/id_rsa Username:docker}
	I0327 19:18:54.637271  624194 ssh_runner.go:195] Run: systemctl --version
	I0327 19:18:54.751104  624194 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 19:18:54.889117  624194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 19:18:54.893440  624194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 19:18:54.902133  624194 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0327 19:18:54.902219  624194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 19:18:54.911027  624194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0327 19:18:54.911050  624194 start.go:494] detecting cgroup driver to use...
	I0327 19:18:54.911085  624194 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 19:18:54.911132  624194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 19:18:54.923308  624194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 19:18:54.934781  624194 docker.go:217] disabling cri-docker service (if available) ...
	I0327 19:18:54.934872  624194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 19:18:54.947451  624194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 19:18:54.958741  624194 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 19:18:55.050131  624194 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 19:18:55.140831  624194 docker.go:233] disabling docker service ...
	I0327 19:18:55.140938  624194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 19:18:55.154478  624194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 19:18:55.167208  624194 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 19:18:55.249146  624194 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 19:18:55.329930  624194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 19:18:55.344117  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 19:18:55.360288  624194 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 19:18:55.360392  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:18:55.370395  624194 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 19:18:55.370513  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:18:55.380470  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:18:55.390568  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:18:55.400473  624194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 19:18:55.409987  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:18:55.420066  624194 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:18:55.429808  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:18:55.440342  624194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 19:18:55.448957  624194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 19:18:55.457122  624194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:18:55.545201  624194 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0327 19:18:55.665803  624194 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 19:18:55.665867  624194 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 19:18:55.669331  624194 start.go:562] Will wait 60s for crictl version
	I0327 19:18:55.669391  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:18:55.672924  624194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 19:18:55.710564  624194 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0327 19:18:55.710708  624194 ssh_runner.go:195] Run: crio --version
	I0327 19:18:55.752686  624194 ssh_runner.go:195] Run: crio --version
	I0327 19:18:55.795969  624194 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.24.6 ...
	I0327 19:18:55.797589  624194 cli_runner.go:164] Run: docker network inspect ha-738145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 19:18:55.810991  624194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0327 19:18:55.814471  624194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 19:18:55.824996  624194 kubeadm.go:877] updating cluster {Name:ha-738145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-738145 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 19:18:55.825164  624194 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 19:18:55.825248  624194 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 19:18:55.868845  624194 crio.go:514] all images are preloaded for cri-o runtime.
	I0327 19:18:55.868870  624194 crio.go:433] Images already preloaded, skipping extraction
	I0327 19:18:55.868923  624194 ssh_runner.go:195] Run: sudo crictl images --output json
	I0327 19:18:55.905932  624194 crio.go:514] all images are preloaded for cri-o runtime.
	I0327 19:18:55.905956  624194 cache_images.go:84] Images are preloaded, skipping loading
	I0327 19:18:55.905966  624194 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 crio true true} ...
	I0327 19:18:55.906073  624194 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-738145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-738145 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 19:18:55.906155  624194 ssh_runner.go:195] Run: crio config
	I0327 19:18:55.961822  624194 cni.go:84] Creating CNI manager for ""
	I0327 19:18:55.961843  624194 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0327 19:18:55.961853  624194 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 19:18:55.961894  624194 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-738145 NodeName:ha-738145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 19:18:55.962063  624194 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-738145"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
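
Aside: the kubeadm config above is rendered from the options struct logged at kubeadm.go:181. A stdlib-only Go sketch of that kind of parameter-to-YAML templating; the template and field names here are illustrative, not minikube's actual ones:

	// kubeadmcfg.go — sketch of rendering a kubeadm config fragment from
	// parameters via text/template (minikube uses its own, larger templates).
	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: ClusterConfiguration\n" +
		"controlPlaneEndpoint: {{.Endpoint}}\n" +
		"kubernetesVersion: {{.Version}}\n" +
		"networking:\n" +
		"  podSubnet: \"{{.PodSubnet}}\"\n" +
		"  serviceSubnet: {{.ServiceSubnet}}\n"

	func main() {
		t := template.Must(template.New("cfg").Parse(tmpl))
		// Values taken from the log above.
		_ = t.Execute(os.Stdout, struct {
			Endpoint, Version, PodSubnet, ServiceSubnet string
		}{"control-plane.minikube.internal:8443", "v1.29.3", "10.244.0.0/16", "10.96.0.0/12"})
	}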
	
	I0327 19:18:55.962086  624194 kube-vip.go:111] generating kube-vip config ...
	I0327 19:18:55.962141  624194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0327 19:18:55.974357  624194 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0327 19:18:55.974482  624194 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
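
Aside: the `lsmod | grep ip_vs` probe above is what drives the lb_enable setting in this manifest: when the ip_vs kernel module is loaded, kube-vip's control-plane load-balancing is switched on. A small Go sketch of that detection (not minikube's actual code):

	// kubevip.go — sketch of the ip_vs probe logged above: load-balancing
	// is auto-enabled only if the kernel module is loaded.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("lsmod").Output()
		if err != nil {
			panic(err)
		}
		lbEnable := strings.Contains(string(out), "ip_vs")
		// In the pod spec this becomes the env vars lb_enable / lb_port.
		fmt.Printf("auto-enabling control-plane load-balancing: %v\n", lbEnable)
	}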
	I0327 19:18:55.974550  624194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 19:18:55.983067  624194 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 19:18:55.983143  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0327 19:18:55.991535  624194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0327 19:18:56.011432  624194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 19:18:56.031167  624194 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0327 19:18:56.049891  624194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0327 19:18:56.070098  624194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0327 19:18:56.073698  624194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 19:18:56.084881  624194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:18:56.181284  624194 ssh_runner.go:195] Run: sudo systemctl start kubelet
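
Aside: the /etc/hosts one-liner above is idempotent: it strips any stale control-plane.minikube.internal entry with `grep -v` before appending the current VIP mapping, then copies the temp file back via sudo. A Go sketch of the same upsert, assuming minikube's tab-delimited "IP<TAB>name" format (writing /etc/hosts requires root):

	// hostsentry.go — sketch of the idempotent /etc/hosts update above:
	// drop any stale line for the name, then append the fresh mapping.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing entry for this name, mirroring the grep -v.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		err := upsertHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal")
		fmt.Println(err)
	}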
	I0327 19:18:56.194930  624194 certs.go:68] Setting up /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145 for IP: 192.168.49.2
	I0327 19:18:56.194952  624194 certs.go:194] generating shared ca certs ...
	I0327 19:18:56.194973  624194 certs.go:226] acquiring lock for ca certs: {Name:mk95afc777a0fafcf19d589f4cbc5a374d1fe472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:18:56.195125  624194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key
	I0327 19:18:56.195173  624194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key
	I0327 19:18:56.195185  624194 certs.go:256] generating profile certs ...
	I0327 19:18:56.195258  624194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/client.key
	I0327 19:18:56.195293  624194 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key.baaceb51
	I0327 19:18:56.195316  624194 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt.baaceb51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0327 19:18:56.416494  624194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt.baaceb51 ...
	I0327 19:18:56.416523  624194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt.baaceb51: {Name:mk3be2dd8a88a883ee595aaf9b30c73571e994ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:18:56.416701  624194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key.baaceb51 ...
	I0327 19:18:56.416719  624194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key.baaceb51: {Name:mk6ed298f8dda4484e5a34a0b58120e176f98a81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:18:56.416802  624194 certs.go:381] copying /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt.baaceb51 -> /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt
	I0327 19:18:56.416943  624194 certs.go:385] copying /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key.baaceb51 -> /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key
	I0327 19:18:56.417087  624194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.key
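
Aside: the apiserver serving cert regenerated above carries IP SANs for the in-cluster kubernetes service IP (10.96.0.1), localhost, both control-plane node IPs, and the HA VIP 192.168.49.254, so clients can reach the apiserver by any of those addresses. A self-signed Go sketch of issuing such a cert (minikube signs with its CA and adds DNS SANs too; treat this as illustrative only):

	// apiservercert.go — sketch of a serving cert with the IP SANs from the
	// log above. CA signing is elided; the cert here is self-signed.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // ~CertExpiration:26280h0m0s above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), // in-cluster kubernetes service IP
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.49.2"),   // primary control plane
				net.ParseIP("192.168.49.3"),   // m02
				net.ParseIP("192.168.49.254"), // HA apiserver VIP
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println(len(der), err)
	}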
	I0327 19:18:56.417106  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 19:18:56.417121  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 19:18:56.417148  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 19:18:56.417170  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 19:18:56.417186  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 19:18:56.417197  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 19:18:56.417211  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 19:18:56.417227  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 19:18:56.417283  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem (1338 bytes)
	W0327 19:18:56.417316  624194 certs.go:480] ignoring /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623_empty.pem, impossibly tiny 0 bytes
	I0327 19:18:56.417328  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 19:18:56.417353  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem (1082 bytes)
	I0327 19:18:56.417379  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem (1123 bytes)
	I0327 19:18:56.417402  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem (1679 bytes)
	I0327 19:18:56.417453  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem (1708 bytes)
	I0327 19:18:56.417488  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem -> /usr/share/ca-certificates/567623.pem
	I0327 19:18:56.417506  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> /usr/share/ca-certificates/5676232.pem
	I0327 19:18:56.417522  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:18:56.418359  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 19:18:56.445738  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 19:18:56.469728  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 19:18:56.493441  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 19:18:56.517316  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 19:18:56.542669  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 19:18:56.567295  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 19:18:56.590353  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 19:18:56.613011  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem --> /usr/share/ca-certificates/567623.pem (1338 bytes)
	I0327 19:18:56.639672  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem --> /usr/share/ca-certificates/5676232.pem (1708 bytes)
	I0327 19:18:56.663846  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 19:18:56.687821  624194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 19:18:56.705130  624194 ssh_runner.go:195] Run: openssl version
	I0327 19:18:56.710439  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567623.pem && ln -fs /usr/share/ca-certificates/567623.pem /etc/ssl/certs/567623.pem"
	I0327 19:18:56.719628  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567623.pem
	I0327 19:18:56.723180  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 19:06 /usr/share/ca-certificates/567623.pem
	I0327 19:18:56.723245  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567623.pem
	I0327 19:18:56.730457  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567623.pem /etc/ssl/certs/51391683.0"
	I0327 19:18:56.739116  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5676232.pem && ln -fs /usr/share/ca-certificates/5676232.pem /etc/ssl/certs/5676232.pem"
	I0327 19:18:56.748330  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5676232.pem
	I0327 19:18:56.751788  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 19:06 /usr/share/ca-certificates/5676232.pem
	I0327 19:18:56.751903  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5676232.pem
	I0327 19:18:56.758644  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5676232.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 19:18:56.767204  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 19:18:56.776082  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:18:56.779622  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:18:56.779685  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:18:56.786564  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
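
Aside: each `openssl x509 -hash` plus `ln -fs ... <hash>.0` pair above follows OpenSSL's c_rehash convention: the library locates trust anchors in /etc/ssl/certs by subject-hash filename. A Go sketch of one such link, using the minikubeCA path and hash (b5213941) from the log; needs root for /etc/ssl/certs:

	// certlink.go — sketch of the c_rehash-style step: compute a CA cert's
	// OpenSSL subject hash and link it as "<hash>.0" for lookup.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0" // e.g. b5213941.0
		_ = os.Remove(link) // refresh an existing link, like ln -fs
		fmt.Println(os.Symlink(pem, link))
	}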
	I0327 19:18:56.794967  624194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 19:18:56.798314  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 19:18:56.804881  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 19:18:56.811516  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 19:18:56.818217  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 19:18:56.824752  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 19:18:56.831526  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
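
Aside: each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same check in Go, with the path taken from the log:

	// checkend.go — Go equivalent of `openssl x509 -checkend 86400`:
	// report whether a cert expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", time.Until(cert.NotAfter) < 24*time.Hour)
	}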
	I0327 19:18:56.838307  624194 kubeadm.go:391] StartCluster: {Name:ha-738145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-738145 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:18:56.838445  624194 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0327 19:18:56.838537  624194 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0327 19:18:56.875816  624194 cri.go:89] found id: ""
	I0327 19:18:56.875903  624194 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 19:18:56.884446  624194 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 19:18:56.884468  624194 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 19:18:56.884473  624194 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 19:18:56.884529  624194 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 19:18:56.892492  624194 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 19:18:56.892922  624194 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-738145" does not appear in /home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:18:56.893034  624194 kubeconfig.go:62] /home/jenkins/minikube-integration/18517-562206/kubeconfig needs updating (will repair): [kubeconfig missing "ha-738145" cluster setting kubeconfig missing "ha-738145" context setting]
	I0327 19:18:56.893326  624194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/kubeconfig: {Name:mk1481518c17ad7c54533eeb54c75c7968328394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:18:56.893700  624194 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:18:56.893960  624194 kapi.go:59] client config for ha-738145: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/client.crt", KeyFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/client.key", CAFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1700360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 19:18:56.894562  624194 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 19:18:56.894710  624194 cert_rotation.go:137] Starting client certificate rotation controller
	I0327 19:18:56.903029  624194 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0327 19:18:56.903051  624194 kubeadm.go:591] duration metric: took 18.572563ms to restartPrimaryControlPlane
	I0327 19:18:56.903060  624194 kubeadm.go:393] duration metric: took 64.762236ms to StartCluster
	I0327 19:18:56.903075  624194 settings.go:142] acquiring lock: {Name:mkffcd59f6abeb2b3cc53bb555eb7fb5f175c67e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:18:56.903139  624194 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:18:56.903739  624194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/kubeconfig: {Name:mk1481518c17ad7c54533eeb54c75c7968328394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:18:56.903929  624194 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 19:18:56.903956  624194 start.go:240] waiting for startup goroutines ...
	I0327 19:18:56.903964  624194 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 19:18:56.907003  624194 out.go:177] * Enabled addons: 
	I0327 19:18:56.904337  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:18:56.908769  624194 addons.go:505] duration metric: took 4.801968ms for enable addons: enabled=[]
	I0327 19:18:56.908793  624194 start.go:245] waiting for cluster config update ...
	I0327 19:18:56.908805  624194 start.go:254] writing updated cluster config ...
	I0327 19:18:56.910598  624194 out.go:177] 
	I0327 19:18:56.914334  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:18:56.914441  624194 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/config.json ...
	I0327 19:18:56.917036  624194 out.go:177] * Starting "ha-738145-m02" control-plane node in "ha-738145" cluster
	I0327 19:18:56.919086  624194 cache.go:121] Beginning downloading kic base image for docker with crio
	I0327 19:18:56.920774  624194 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 19:18:56.922398  624194 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 19:18:56.922427  624194 cache.go:56] Caching tarball of preloaded images
	I0327 19:18:56.922472  624194 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 19:18:56.922573  624194 preload.go:173] Found /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0327 19:18:56.922589  624194 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 19:18:56.922747  624194 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/config.json ...
	I0327 19:18:56.936978  624194 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon, skipping pull
	I0327 19:18:56.937004  624194 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in daemon, skipping load
	I0327 19:18:56.937018  624194 cache.go:194] Successfully downloaded all kic artifacts
	I0327 19:18:56.937059  624194 start.go:360] acquireMachinesLock for ha-738145-m02: {Name:mk3b2e422a30e2cada6c232f7c531bc4d76c6a31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:18:56.937127  624194 start.go:364] duration metric: took 44.504µs to acquireMachinesLock for "ha-738145-m02"
	I0327 19:18:56.937156  624194 start.go:96] Skipping create...Using existing machine configuration
	I0327 19:18:56.937165  624194 fix.go:54] fixHost starting: m02
	I0327 19:18:56.937433  624194 cli_runner.go:164] Run: docker container inspect ha-738145-m02 --format={{.State.Status}}
	I0327 19:18:56.952545  624194 fix.go:112] recreateIfNeeded on ha-738145-m02: state=Stopped err=<nil>
	W0327 19:18:56.952572  624194 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 19:18:56.954716  624194 out.go:177] * Restarting existing docker container for "ha-738145-m02" ...
	I0327 19:18:56.956585  624194 cli_runner.go:164] Run: docker start ha-738145-m02
	I0327 19:18:57.204618  624194 cli_runner.go:164] Run: docker container inspect ha-738145-m02 --format={{.State.Status}}
	I0327 19:18:57.222691  624194 kic.go:430] container "ha-738145-m02" state is running.
	I0327 19:18:57.223037  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145-m02
	I0327 19:18:57.241728  624194 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/config.json ...
	I0327 19:18:57.241980  624194 machine.go:94] provisionDockerMachine start ...
	I0327 19:18:57.242040  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:18:57.259914  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:18:57.260154  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33583 <nil> <nil>}
	I0327 19:18:57.260163  624194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 19:18:57.260769  624194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51616->127.0.0.1:33583: read: connection reset by peer
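
Aside: the handshake failure above is expected: `docker start` returned before sshd inside the container was listening, so minikube keeps redialing until the port answers (the success about three seconds later). A sketch of that retry loop, using the host port 33583 from the log:

	// sshretry.go — sketch of redialing a just-started container's SSH port
	// until it answers or a deadline passes.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		deadline := time.Now().Add(30 * time.Second)
		for {
			conn, err := net.DialTimeout("tcp", "127.0.0.1:33583", 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("ssh port is answering")
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("gave up:", err)
				return
			}
			time.Sleep(time.Second)
		}
	}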
	I0327 19:19:00.437305  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-738145-m02
	
	I0327 19:19:00.437396  624194 ubuntu.go:169] provisioning hostname "ha-738145-m02"
	I0327 19:19:00.437565  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:19:00.475421  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:19:00.475684  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33583 <nil> <nil>}
	I0327 19:19:00.475696  624194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-738145-m02 && echo "ha-738145-m02" | sudo tee /etc/hostname
	I0327 19:19:00.700600  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-738145-m02
	
	I0327 19:19:00.700678  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:19:00.725348  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:19:00.725665  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33583 <nil> <nil>}
	I0327 19:19:00.725698  624194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-738145-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-738145-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-738145-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 19:19:00.879408  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 19:19:00.879494  624194 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18517-562206/.minikube CaCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18517-562206/.minikube}
	I0327 19:19:00.879541  624194 ubuntu.go:177] setting up certificates
	I0327 19:19:00.879564  624194 provision.go:84] configureAuth start
	I0327 19:19:00.879637  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145-m02
	I0327 19:19:00.931805  624194 provision.go:143] copyHostCerts
	I0327 19:19:00.931846  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem
	I0327 19:19:00.931880  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem, removing ...
	I0327 19:19:00.931887  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem
	I0327 19:19:00.931959  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem (1679 bytes)
	I0327 19:19:00.932039  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem
	I0327 19:19:00.932056  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem, removing ...
	I0327 19:19:00.932061  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem
	I0327 19:19:00.932087  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem (1082 bytes)
	I0327 19:19:00.932130  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem
	I0327 19:19:00.932145  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem, removing ...
	I0327 19:19:00.932149  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem
	I0327 19:19:00.932172  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem (1123 bytes)
	I0327 19:19:00.932222  624194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem org=jenkins.ha-738145-m02 san=[127.0.0.1 192.168.49.3 ha-738145-m02 localhost minikube]
	I0327 19:19:01.870061  624194 provision.go:177] copyRemoteCerts
	I0327 19:19:01.870215  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 19:19:01.870281  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:19:01.887215  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33583 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m02/id_rsa Username:docker}
	I0327 19:19:01.983407  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 19:19:01.983478  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 19:19:02.022162  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 19:19:02.022232  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 19:19:02.050463  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 19:19:02.050540  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 19:19:02.077812  624194 provision.go:87] duration metric: took 1.19822132s to configureAuth
	I0327 19:19:02.077850  624194 ubuntu.go:193] setting minikube options for container-runtime
	I0327 19:19:02.078117  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:19:02.078247  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:19:02.104603  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:19:02.104858  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33583 <nil> <nil>}
	I0327 19:19:02.104873  624194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 19:19:02.463626  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 19:19:02.463653  624194 machine.go:97] duration metric: took 5.221663248s to provisionDockerMachine
	I0327 19:19:02.463665  624194 start.go:293] postStartSetup for "ha-738145-m02" (driver="docker")
	I0327 19:19:02.463676  624194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 19:19:02.463762  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 19:19:02.463816  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:19:02.481804  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33583 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m02/id_rsa Username:docker}
	I0327 19:19:02.570651  624194 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 19:19:02.573593  624194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 19:19:02.573625  624194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 19:19:02.573637  624194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 19:19:02.573643  624194 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0327 19:19:02.573653  624194 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-562206/.minikube/addons for local assets ...
	I0327 19:19:02.573707  624194 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-562206/.minikube/files for local assets ...
	I0327 19:19:02.573780  624194 filesync.go:149] local asset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> 5676232.pem in /etc/ssl/certs
	I0327 19:19:02.573786  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> /etc/ssl/certs/5676232.pem
	I0327 19:19:02.573933  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 19:19:02.582186  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem --> /etc/ssl/certs/5676232.pem (1708 bytes)
	I0327 19:19:02.605707  624194 start.go:296] duration metric: took 142.028382ms for postStartSetup
	I0327 19:19:02.605808  624194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 19:19:02.605868  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:19:02.622692  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33583 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m02/id_rsa Username:docker}
	I0327 19:19:02.715052  624194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0327 19:19:02.724483  624194 fix.go:56] duration metric: took 5.787309885s for fixHost
	I0327 19:19:02.724504  624194 start.go:83] releasing machines lock for "ha-738145-m02", held for 5.787359665s
	I0327 19:19:02.724570  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145-m02
	I0327 19:19:02.754627  624194 out.go:177] * Found network options:
	I0327 19:19:02.757731  624194 out.go:177]   - NO_PROXY=192.168.49.2
	W0327 19:19:02.760762  624194 proxy.go:119] fail to check proxy env: Error ip not in block
	W0327 19:19:02.760816  624194 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 19:19:02.760884  624194 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 19:19:02.760922  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:19:02.761202  624194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 19:19:02.761238  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m02
	I0327 19:19:02.802673  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33583 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m02/id_rsa Username:docker}
	I0327 19:19:02.804222  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33583 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m02/id_rsa Username:docker}
	I0327 19:19:03.091235  624194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 19:19:03.167082  624194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 19:19:03.248036  624194 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0327 19:19:03.248105  624194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 19:19:03.370887  624194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
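
Aside: the find/mv sweep above parks any default bridge or podman CNI configs as *.mk_disabled so that only kindnet's config is loaded; here none were present. A Go sketch of the same sweep:

	// cnidisable.go — rename default bridge/podman CNI configs to
	// *.mk_disabled so the CRI only loads the intended CNI config.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		matches, err := filepath.Glob("/etc/cni/net.d/*")
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			base := filepath.Base(m)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue // already parked
			}
			if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
				fmt.Println("disabling", m)
				_ = os.Rename(m, m+".mk_disabled")
			}
		}
	}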
	I0327 19:19:03.370906  624194 start.go:494] detecting cgroup driver to use...
	I0327 19:19:03.370936  624194 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 19:19:03.370982  624194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 19:19:03.392997  624194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 19:19:03.410166  624194 docker.go:217] disabling cri-docker service (if available) ...
	I0327 19:19:03.410276  624194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 19:19:03.437553  624194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 19:19:03.479591  624194 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 19:19:03.883318  624194 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 19:19:04.117515  624194 docker.go:233] disabling docker service ...
	I0327 19:19:04.117640  624194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 19:19:04.141248  624194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 19:19:04.208796  624194 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 19:19:04.502441  624194 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 19:19:04.756092  624194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0327 19:19:04.806675  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 19:19:04.905061  624194 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 19:19:04.905129  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:19:04.958522  624194 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 19:19:04.958594  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:19:05.025154  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:19:05.101478  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:19:05.144495  624194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 19:19:05.204534  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:19:05.270526  624194 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:19:05.302477  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:19:05.329828  624194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 19:19:05.372432  624194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
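
Aside: the sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. A Go sketch of the first two rewrites applied to an in-memory copy of the file (the regexes mirror the sed expressions; the starting values are assumed for illustration):

	// crioconf.go — sketch of the sed-style key rewrites in 02-crio.conf.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.8\"\n" +
			"cgroup_manager = \"systemd\"\n"
		conf = regexp.MustCompile("(?m)^.*pause_image = .*$").
			ReplaceAllString(conf, "pause_image = \"registry.k8s.io/pause:3.9\"")
		conf = regexp.MustCompile("(?m)^.*cgroup_manager = .*$").
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"")
		fmt.Print(conf)
	}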
	I0327 19:19:05.398405  624194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:19:05.634051  624194 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0327 19:19:07.093493  624194 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.459400957s)
	I0327 19:19:07.093517  624194 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 19:19:07.093570  624194 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 19:19:07.102219  624194 start.go:562] Will wait 60s for crictl version
	I0327 19:19:07.102292  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:19:07.106293  624194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 19:19:07.195728  624194 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0327 19:19:07.195820  624194 ssh_runner.go:195] Run: crio --version
	I0327 19:19:07.348907  624194 ssh_runner.go:195] Run: crio --version
	I0327 19:19:07.438527  624194 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.24.6 ...
	I0327 19:19:07.441005  624194 out.go:177]   - env NO_PROXY=192.168.49.2
	I0327 19:19:07.443357  624194 cli_runner.go:164] Run: docker network inspect ha-738145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 19:19:07.472037  624194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0327 19:19:07.475858  624194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 19:19:07.486651  624194 mustload.go:65] Loading cluster: ha-738145
	I0327 19:19:07.486900  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:19:07.487167  624194 cli_runner.go:164] Run: docker container inspect ha-738145 --format={{.State.Status}}
	I0327 19:19:07.515449  624194 host.go:66] Checking if "ha-738145" exists ...
	I0327 19:19:07.515718  624194 certs.go:68] Setting up /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145 for IP: 192.168.49.3
	I0327 19:19:07.515734  624194 certs.go:194] generating shared ca certs ...
	I0327 19:19:07.515751  624194 certs.go:226] acquiring lock for ca certs: {Name:mk95afc777a0fafcf19d589f4cbc5a374d1fe472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:19:07.515860  624194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key
	I0327 19:19:07.515909  624194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key
	I0327 19:19:07.515919  624194 certs.go:256] generating profile certs ...
	I0327 19:19:07.515993  624194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/client.key
	I0327 19:19:07.516058  624194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key.3bea20a9
	I0327 19:19:07.516102  624194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.key
	I0327 19:19:07.516115  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 19:19:07.516128  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 19:19:07.516143  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 19:19:07.516156  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 19:19:07.516171  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 19:19:07.516183  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 19:19:07.516194  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 19:19:07.516207  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 19:19:07.516255  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem (1338 bytes)
	W0327 19:19:07.516307  624194 certs.go:480] ignoring /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623_empty.pem, impossibly tiny 0 bytes
	I0327 19:19:07.516320  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 19:19:07.516346  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem (1082 bytes)
	I0327 19:19:07.516376  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem (1123 bytes)
	I0327 19:19:07.516401  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem (1679 bytes)
	I0327 19:19:07.516454  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem (1708 bytes)
	I0327 19:19:07.516498  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> /usr/share/ca-certificates/5676232.pem
	I0327 19:19:07.516513  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:19:07.516524  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem -> /usr/share/ca-certificates/567623.pem
	I0327 19:19:07.516584  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:19:07.540174  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145/id_rsa Username:docker}
	I0327 19:19:07.626173  624194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0327 19:19:07.635848  624194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0327 19:19:07.666507  624194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0327 19:19:07.676970  624194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0327 19:19:07.712183  624194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0327 19:19:07.722718  624194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0327 19:19:07.751435  624194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0327 19:19:07.764479  624194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0327 19:19:07.794296  624194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0327 19:19:07.798237  624194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0327 19:19:07.810834  624194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0327 19:19:07.821684  624194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0327 19:19:07.836656  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 19:19:07.873306  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 19:19:07.902671  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 19:19:07.946322  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 19:19:07.982904  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 19:19:08.028113  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 19:19:08.067600  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 19:19:08.107245  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 19:19:08.143091  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem --> /usr/share/ca-certificates/5676232.pem (1708 bytes)
	I0327 19:19:08.186502  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 19:19:08.215660  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem --> /usr/share/ca-certificates/567623.pem (1338 bytes)
	I0327 19:19:08.257007  624194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0327 19:19:08.277046  624194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0327 19:19:08.295099  624194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0327 19:19:08.314680  624194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0327 19:19:08.334837  624194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0327 19:19:08.355834  624194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0327 19:19:08.382159  624194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0327 19:19:08.403289  624194 ssh_runner.go:195] Run: openssl version
	I0327 19:19:08.409520  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5676232.pem && ln -fs /usr/share/ca-certificates/5676232.pem /etc/ssl/certs/5676232.pem"
	I0327 19:19:08.420716  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5676232.pem
	I0327 19:19:08.424989  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 19:06 /usr/share/ca-certificates/5676232.pem
	I0327 19:19:08.425080  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5676232.pem
	I0327 19:19:08.433025  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5676232.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 19:19:08.442735  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 19:19:08.454970  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:19:08.461956  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:19:08.462079  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:19:08.469599  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 19:19:08.479163  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567623.pem && ln -fs /usr/share/ca-certificates/567623.pem /etc/ssl/certs/567623.pem"
	I0327 19:19:08.489131  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567623.pem
	I0327 19:19:08.493281  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 19:06 /usr/share/ca-certificates/567623.pem
	I0327 19:19:08.493393  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567623.pem
	I0327 19:19:08.500735  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567623.pem /etc/ssl/certs/51391683.0"
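The three openssl x509 -hash runs above compute each CA's OpenSSL subject hash, which is why the symlinks land at /etc/ssl/certs/3ec20f2e.0, b5213941.0 and 51391683.0: that <hash>.0 naming is how the system trust store looks certificates up by subject. A minimal sketch of the same install step in Go, shelling out to openssl as the log does (installCACert is an illustrative name, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links a PEM file into /etc/ssl/certs under its OpenSSL
// subject hash, the same <hash>.0 convention seen in the log above.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}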
	I0327 19:19:08.510733  624194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 19:19:08.515089  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 19:19:08.522599  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 19:19:08.530663  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 19:19:08.537922  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 19:19:08.545592  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 19:19:08.553444  624194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
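Each -checkend 86400 invocation above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit here would force regeneration before the node joins. The equivalent check in Go, as a hedged sketch rather than minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d -- the question `openssl x509 -checkend` answers.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	fmt.Println(soon, err)
}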
	I0327 19:19:08.564977  624194 kubeadm.go:928] updating node {m02 192.168.49.3 8443 v1.29.3 crio true true} ...
	I0327 19:19:08.565143  624194 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-738145-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-738145 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 19:19:08.565192  624194 kube-vip.go:111] generating kube-vip config ...
	I0327 19:19:08.565291  624194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0327 19:19:08.579581  624194 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0327 19:19:08.579730  624194 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0327 19:19:08.579843  624194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 19:19:08.590345  624194 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 19:19:08.590469  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0327 19:19:08.600215  624194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0327 19:19:08.619432  624194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 19:19:08.639013  624194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0327 19:19:08.663496  624194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0327 19:19:08.670048  624194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
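The grep/rewrite pair above is an idempotent /etc/hosts update: the first command checks whether the control-plane VIP entry already exists, and the second filters out any old control-plane.minikube.internal line before appending the fresh 192.168.49.254 mapping through a temp file, so repeated restarts never stack duplicate entries. A sketch of the same logic in Go (ensureHostsEntry is illustrative, not the shipped helper):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so exactly one line maps host to ip,
// mirroring the grep-filter-append one-liner the log runs over /etc/hosts.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		os.Exit(1)
	}
}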
	I0327 19:19:08.681978  624194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:19:08.810115  624194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 19:19:08.822912  624194 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0327 19:19:08.826401  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:19:08.827865  624194 out.go:177] * Verifying Kubernetes components...
	I0327 19:19:08.831297  624194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:19:08.979734  624194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 19:19:09.002980  624194 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:19:09.003387  624194 kapi.go:59] client config for ha-738145: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/client.crt", KeyFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/client.key", CAFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1700360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0327 19:19:09.003477  624194 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
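The warning above swaps the kube-vip VIP endpoint (https://192.168.49.254:8443) for a concrete control-plane endpoint (https://192.168.49.2:8443), presumably so the readiness wait does not depend on the VIP being live mid-restart. With client-go, that override is one field on rest.Config before the clientset is built; a minimal sketch, reusing the kubeconfig path from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig referenced in the log, then point the client at a
	// concrete control-plane endpoint instead of the kube-vip VIP.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18517-562206/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.Host = "https://192.168.49.2:8443" // direct node endpoint, not 192.168.49.254
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client will talk to:", cfg.Host, cs != nil)
}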
	I0327 19:19:09.003777  624194 node_ready.go:35] waiting up to 6m0s for node "ha-738145-m02" to be "Ready" ...
	I0327 19:19:09.003892  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:09.003905  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:09.003914  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:09.003922  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:20.769497  624194 round_trippers.go:574] Response Status: 500 Internal Server Error in 11765 milliseconds
	I0327 19:19:20.769844  624194 node_ready.go:53] error getting node "ha-738145-m02": etcdserver: request timed out
	I0327 19:19:20.769932  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:20.769945  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:20.769953  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:20.769958  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.303418  624194 round_trippers.go:574] Response Status: 200 OK in 4533 milliseconds
	I0327 19:19:25.307138  624194 node_ready.go:49] node "ha-738145-m02" has status "Ready":"True"
	I0327 19:19:25.307160  624194 node_ready.go:38] duration metric: took 16.303354099s for node "ha-738145-m02" to be "Ready" ...
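Most of that 16.3s was a single GET that sat for 11.7s and came back 500 with "etcdserver: request timed out" while the restarted etcd members re-formed quorum; the poller treats such errors as transient and simply asks again. A condensed version of that readiness loop with client-go (a sketch assuming an already-built clientset, not minikube's exact code):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the node's Ready condition is True, treating
// API errors (like the etcd timeout above) as transient and retrying.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient (e.g. 500 from a recovering etcd): retry
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}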
	I0327 19:19:25.307171  624194 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 19:19:25.307241  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0327 19:19:25.307248  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.307256  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.307263  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.404749  624194 round_trippers.go:574] Response Status: 200 OK in 97 milliseconds
	I0327 19:19:25.419274  624194 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-cq2vx" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.419451  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:19:25.419482  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.419509  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.419527  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.449152  624194 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0327 19:19:25.452397  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:25.452415  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.452424  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.452430  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.458727  624194 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 19:19:25.460143  624194 pod_ready.go:92] pod "coredns-76f75df574-cq2vx" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:25.460163  624194 pod_ready.go:81] duration metric: took 40.803049ms for pod "coredns-76f75df574-cq2vx" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.460174  624194 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-knk2g" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.460238  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-knk2g
	I0327 19:19:25.460243  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.460251  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.460255  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.465260  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:19:25.466843  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:25.466902  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.466926  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.466949  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.477762  624194 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0327 19:19:25.480535  624194 pod_ready.go:92] pod "coredns-76f75df574-knk2g" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:25.480600  624194 pod_ready.go:81] duration metric: took 20.418632ms for pod "coredns-76f75df574-knk2g" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.480628  624194 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.480708  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-738145
	I0327 19:19:25.480735  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.480757  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.480788  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.486945  624194 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 19:19:25.488281  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:25.488339  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.488364  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.488384  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.493646  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:19:25.494634  624194 pod_ready.go:92] pod "etcd-ha-738145" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:25.494691  624194 pod_ready.go:81] duration metric: took 14.040172ms for pod "etcd-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.494717  624194 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.494809  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-738145-m02
	I0327 19:19:25.494837  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.494861  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.494881  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.498710  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:25.499561  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:25.499609  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.499634  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.499656  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.512161  624194 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0327 19:19:25.513595  624194 pod_ready.go:92] pod "etcd-ha-738145-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:25.513666  624194 pod_ready.go:81] duration metric: took 18.918604ms for pod "etcd-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.513692  624194 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-738145-m03" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.513769  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-738145-m03
	I0327 19:19:25.513801  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.513827  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.513848  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.527639  624194 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0327 19:19:25.528819  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:25.528881  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.528905  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.528925  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.551134  624194 round_trippers.go:574] Response Status: 404 Not Found in 22 milliseconds
	I0327 19:19:25.551489  624194 pod_ready.go:97] node "ha-738145-m03" hosting pod "etcd-ha-738145-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:25.551544  624194 pod_ready.go:81] duration metric: took 37.831496ms for pod "etcd-ha-738145-m03" in "kube-system" namespace to be "Ready" ...
	E0327 19:19:25.551570  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145-m03" hosting pod "etcd-ha-738145-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:25.551624  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.707991  624194 request.go:629] Waited for 156.280649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145
	I0327 19:19:25.708105  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145
	I0327 19:19:25.708163  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.708191  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.708214  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.719971  624194 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
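The "Waited ... due to client-side throttling" lines come from client-go's own token-bucket limiter, not the server's priority-and-fairness (the message says as much). The rest.Config dump earlier shows QPS:0, Burst:0, which means the client-go defaults (5 requests/s, burst 10), so this burst of alternating pod-and-node GETs gets spaced out by roughly 200ms each. Lifting the limits, were that desirable, is two fields on the config; a sketch only, since the test binary itself keeps the defaults:

package main

import "k8s.io/client-go/rest"

// raiseClientLimits lifts client-go's default token-bucket limits
// (QPS 5, burst 10) that produce the "client-side throttling" waits above.
func raiseClientLimits(cfg *rest.Config) {
	cfg.QPS = 50    // steady-state requests per second
	cfg.Burst = 100 // short-burst allowance before throttling kicks in
}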
	I0327 19:19:25.907295  624194 request.go:629] Waited for 186.232647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:25.907418  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:25.907451  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:25.907477  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:25.907494  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:25.911637  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:19:25.912683  624194 pod_ready.go:92] pod "kube-apiserver-ha-738145" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:25.912738  624194 pod_ready.go:81] duration metric: took 361.086704ms for pod "kube-apiserver-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:25.912765  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:26.108240  624194 request.go:629] Waited for 195.358662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145-m02
	I0327 19:19:26.108362  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145-m02
	I0327 19:19:26.108403  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:26.108425  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:26.108444  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:26.111508  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:26.307994  624194 request.go:629] Waited for 195.302104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:26.308123  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:26.308155  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:26.308187  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:26.308209  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:26.310981  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:19:26.311975  624194 pod_ready.go:92] pod "kube-apiserver-ha-738145-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:26.312034  624194 pod_ready.go:81] duration metric: took 399.249021ms for pod "kube-apiserver-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:26.312063  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-738145-m03" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:26.507408  624194 request.go:629] Waited for 195.237686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145-m03
	I0327 19:19:26.507529  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145-m03
	I0327 19:19:26.507547  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:26.507556  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:26.507562  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:26.511829  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:19:26.707274  624194 request.go:629] Waited for 194.242057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:26.707394  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:26.707426  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:26.707458  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:26.707478  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:26.710264  624194 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0327 19:19:26.710656  624194 pod_ready.go:97] node "ha-738145-m03" hosting pod "kube-apiserver-ha-738145-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:26.710707  624194 pod_ready.go:81] duration metric: took 398.611075ms for pod "kube-apiserver-ha-738145-m03" in "kube-system" namespace to be "Ready" ...
	E0327 19:19:26.710733  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145-m03" hosting pod "kube-apiserver-ha-738145-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:26.710754  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:26.908217  624194 request.go:629] Waited for 197.342684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145
	I0327 19:19:26.908320  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145
	I0327 19:19:26.908342  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:26.908387  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:26.908405  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:26.914799  624194 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 19:19:27.108208  624194 request.go:629] Waited for 192.300266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:27.108342  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:27.108374  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:27.108402  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:27.108427  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:27.129230  624194 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0327 19:19:27.130030  624194 pod_ready.go:92] pod "kube-controller-manager-ha-738145" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:27.130056  624194 pod_ready.go:81] duration metric: took 419.25995ms for pod "kube-controller-manager-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:27.130070  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:27.307345  624194 request.go:629] Waited for 177.210155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145-m02
	I0327 19:19:27.307473  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145-m02
	I0327 19:19:27.307509  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:27.307536  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:27.307557  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:27.310707  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:27.508284  624194 request.go:629] Waited for 196.28006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:27.508403  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:27.508439  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:27.508466  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:27.508488  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:27.511333  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:19:27.512307  624194 pod_ready.go:92] pod "kube-controller-manager-ha-738145-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:27.512364  624194 pod_ready.go:81] duration metric: took 382.28591ms for pod "kube-controller-manager-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:27.512390  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-738145-m03" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:27.707600  624194 request.go:629] Waited for 195.113666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145-m03
	I0327 19:19:27.707710  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145-m03
	I0327 19:19:27.707760  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:27.707796  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:27.707815  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:27.714815  624194 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 19:19:27.907347  624194 request.go:629] Waited for 191.150988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:27.907450  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:27.907533  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:27.907564  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:27.907584  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:27.910438  624194 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0327 19:19:27.910822  624194 pod_ready.go:97] node "ha-738145-m03" hosting pod "kube-controller-manager-ha-738145-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:27.910866  624194 pod_ready.go:81] duration metric: took 398.45553ms for pod "kube-controller-manager-ha-738145-m03" in "kube-system" namespace to be "Ready" ...
	E0327 19:19:27.910904  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145-m03" hosting pod "kube-controller-manager-ha-738145-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:27.910931  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b8vjf" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:28.108022  624194 request.go:629] Waited for 196.995978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8vjf
	I0327 19:19:28.108140  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8vjf
	I0327 19:19:28.108207  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:28.108245  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:28.108277  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:28.111698  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:28.308096  624194 request.go:629] Waited for 195.336335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:28.308210  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:28.308221  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:28.308231  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:28.308238  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:28.311860  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:28.312752  624194 pod_ready.go:92] pod "kube-proxy-b8vjf" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:28.312817  624194 pod_ready.go:81] duration metric: took 401.853395ms for pod "kube-proxy-b8vjf" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:28.312840  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fh7bn" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:28.508206  624194 request.go:629] Waited for 195.295909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh7bn
	I0327 19:19:28.508322  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh7bn
	I0327 19:19:28.508334  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:28.508357  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:28.508375  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:28.511816  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:28.707839  624194 request.go:629] Waited for 195.341956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:19:28.707925  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:19:28.707937  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:28.707946  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:28.707950  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:28.711558  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:28.712385  624194 pod_ready.go:92] pod "kube-proxy-fh7bn" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:28.712405  624194 pod_ready.go:81] duration metric: took 399.555832ms for pod "kube-proxy-fh7bn" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:28.712436  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p46p2" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:28.907297  624194 request.go:629] Waited for 194.78899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p46p2
	I0327 19:19:28.907385  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p46p2
	I0327 19:19:28.907456  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:28.907468  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:28.907474  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:28.910767  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:29.107777  624194 request.go:629] Waited for 196.314472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:29.107830  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:29.107836  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:29.107851  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:29.107872  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:29.110482  624194 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0327 19:19:29.110815  624194 pod_ready.go:97] node "ha-738145-m03" hosting pod "kube-proxy-p46p2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:29.110859  624194 pod_ready.go:81] duration metric: took 398.39185ms for pod "kube-proxy-p46p2" in "kube-system" namespace to be "Ready" ...
	E0327 19:19:29.110878  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145-m03" hosting pod "kube-proxy-p46p2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:29.110886  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vgfbw" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:29.308059  624194 request.go:629] Waited for 197.039571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vgfbw
	I0327 19:19:29.308173  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vgfbw
	I0327 19:19:29.308195  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:29.308235  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:29.308252  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:29.312455  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:19:29.508290  624194 request.go:629] Waited for 194.641315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:29.508371  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:29.508382  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:29.508391  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:29.508397  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:29.511564  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:29.512210  624194 pod_ready.go:92] pod "kube-proxy-vgfbw" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:29.512232  624194 pod_ready.go:81] duration metric: took 401.335998ms for pod "kube-proxy-vgfbw" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:29.512244  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:29.708085  624194 request.go:629] Waited for 195.765223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145
	I0327 19:19:29.708208  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145
	I0327 19:19:29.708221  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:29.708230  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:29.708237  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:29.711568  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:29.907578  624194 request.go:629] Waited for 195.313492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:29.907630  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:19:29.907636  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:29.907644  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:29.907651  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:29.910677  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:19:29.911210  624194 pod_ready.go:92] pod "kube-scheduler-ha-738145" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:29.911232  624194 pod_ready.go:81] duration metric: took 398.980507ms for pod "kube-scheduler-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:29.911245  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:30.107670  624194 request.go:629] Waited for 196.346471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145-m02
	I0327 19:19:30.107763  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145-m02
	I0327 19:19:30.107774  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:30.107785  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:30.107802  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:30.111627  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:30.307643  624194 request.go:629] Waited for 194.358798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:30.307702  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:19:30.307708  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:30.307722  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:30.307726  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:30.310843  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:19:30.311424  624194 pod_ready.go:92] pod "kube-scheduler-ha-738145-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 19:19:30.311474  624194 pod_ready.go:81] duration metric: took 400.218752ms for pod "kube-scheduler-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:30.311503  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-738145-m03" in "kube-system" namespace to be "Ready" ...
	I0327 19:19:30.507322  624194 request.go:629] Waited for 195.731123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145-m03
	I0327 19:19:30.507424  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145-m03
	I0327 19:19:30.507437  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:30.507446  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:30.507463  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:30.510463  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:19:30.707602  624194 request.go:629] Waited for 196.318401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:30.707662  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m03
	I0327 19:19:30.707672  624194 round_trippers.go:469] Request Headers:
	I0327 19:19:30.707696  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:19:30.707702  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:19:30.710793  624194 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0327 19:19:30.710917  624194 pod_ready.go:97] node "ha-738145-m03" hosting pod "kube-scheduler-ha-738145-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:30.710945  624194 pod_ready.go:81] duration metric: took 399.405333ms for pod "kube-scheduler-ha-738145-m03" in "kube-system" namespace to be "Ready" ...
	E0327 19:19:30.710956  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145-m03" hosting pod "kube-scheduler-ha-738145-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-738145-m03": nodes "ha-738145-m03" not found
	I0327 19:19:30.710968  624194 pod_ready.go:38] duration metric: took 5.403786807s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 19:19:30.710990  624194 api_server.go:52] waiting for apiserver process to appear ...
	I0327 19:19:30.711051  624194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:19:30.722098  624194 api_server.go:72] duration metric: took 21.899092477s to wait for apiserver process to appear ...
	I0327 19:19:30.722121  624194 api_server.go:88] waiting for apiserver healthz status ...
	I0327 19:19:30.722143  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:30.730521  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:30.730555  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
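Every probe in this stretch fails identically: 29 of 30 checks pass, but the start-service-ip-repair-controllers post-start hook has not completed ("reason withheld" because healthz only reveals failure details to sufficiently authorized callers), so /healthz stays at 500 and minikube re-polls on a roughly 500ms cadence until the hook finishes. The loop amounts to the following sketch, assuming an *http.Client already configured with the cluster CA:

package main

import (
	"context"
	"io"
	"log"
	"net/http"
	"time"
)

// waitHealthz re-polls /healthz until it returns 200 OK or ctx expires,
// matching the ~500ms retry cadence visible in this log.
func waitHealthz(ctx context.Context, c *http.Client, url string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		resp, err := c.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			log.Printf("healthz %d:\n%s", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}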
	I0327 19:19:31.223077  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:31.230905  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:31.230933  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:31.722267  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	[... duplicate output elided: the poller rechecked /healthz roughly every 500ms from 19:19:31 through the 19:19:40.235 attempt, and every response was HTTP 500 with a verbose body identical to the one above, [-]poststarthook/start-service-ip-repair-controllers being the only failing check ...]
	I0327 19:19:40.722790  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:40.730434  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:40.730459  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:41.223070  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:41.232244  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:41.232283  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:41.722681  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:41.730573  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:41.730606  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:42.223161  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:42.233121  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:42.233155  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:42.722804  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:42.730695  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:42.730725  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:43.222282  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:43.230076  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:43.230108  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:43.722642  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:43.731115  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:43.731150  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:44.222267  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:44.230564  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:44.230593  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:44.722219  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:44.778573  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:44.778656  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:45.223052  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:45.271667  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:45.271796  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:45.722260  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:45.742876  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:45.742901  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:46.222251  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:46.234488  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:46.234569  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:46.723050  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:46.732192  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:46.732275  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:47.222770  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:47.232580  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:47.232671  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:47.722292  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:47.730371  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:47.730402  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:48.223092  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	[... the same 500 response repeats for every ~500ms poll from 19:19:48 through 19:19:56, always with the single failing check [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld; identical blocks elided ...]
	I0327 19:19:57.222878  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:57.230638  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:57.230665  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:57.723068  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:57.732757  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:57.732789  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:58.222967  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:58.231241  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:58.231270  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:58.722789  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:58.730575  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:58.730620  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:59.222979  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:59.231286  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:59.231323  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:19:59.723060  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:19:59.731055  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:19:59.731136  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:00.222262  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:00.295866  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:00.296273  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:00.722867  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:00.731231  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:00.731266  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:01.222835  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:01.230816  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:01.230845  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:01.722405  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:01.734659  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:01.734704  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:02.223160  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:02.231011  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:02.231042  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:02.722313  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:02.730848  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:02.730883  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:03.222471  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:03.230344  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:03.230377  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:03.722985  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:03.731673  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:03.731705  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:04.223212  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:04.231088  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:04.231115  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:04.722910  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:04.730840  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
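The repeated 500s above are minikube polling the kube-apiserver's /healthz endpoint roughly every 500ms while it waits for the restarted control plane to come up; every check passes except the start-service-ip-repair-controllers post-start hook, whose failure reason the apiserver withholds from callers not authorized to see it. Below is a minimal Go sketch of this style of poll loop, assuming only the endpoint URL and cadence visible in the log — it is not minikube's actual implementation, and it skips certificate verification only because the sketch has no cluster CA bundle to verify against:

    // healthzpoll: a minimal sketch of the ~500ms /healthz polling visible in
    // the log above. Only the URL and poll interval are taken from the log;
    // the client setup is an assumption, and InsecureSkipVerify is used solely
    // because this sketch has no cluster CA to verify the apiserver cert.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        for {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body) // on 500 this is the verbose check list
                resp.Body.Close()
                fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // apiserver healthy: body is just "ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
    }

In the run above, the loop keeps logging the full verbose check list on every 500 until the failing hook completes.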
	W0327 19:20:04.730887  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:05.222320  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:05.230194  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:05.230236  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:05.722794  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:05.730774  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:05.730802  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:06.222276  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:06.230256  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:06.230285  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:06.722869  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:06.731464  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:06.731493  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:07.223065  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:07.230808  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:07.230848  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:07.722361  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:07.744613  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:07.744664  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:08.223043  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:08.231111  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:08.231143  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 19:20:08.722182  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:08.730338  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:08.730390  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
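At 19:20:09 the wait loop pauses probing to collect diagnostics: minikube enumerates the control-plane containers through the CRI and then tails their logs, as sketched after the enumeration below.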
	I0327 19:20:09.222543  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0327 19:20:09.222697  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0327 19:20:09.291692  624194 cri.go:89] found id: "bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11"
	I0327 19:20:09.291759  624194 cri.go:89] found id: "2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a"
	I0327 19:20:09.291784  624194 cri.go:89] found id: ""
	I0327 19:20:09.291808  624194 logs.go:276] 2 containers: [bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11 2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a]
	I0327 19:20:09.291896  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.295590  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.299085  624194 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0327 19:20:09.299151  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0327 19:20:09.337128  624194 cri.go:89] found id: "82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0"
	I0327 19:20:09.337152  624194 cri.go:89] found id: "380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884"
	I0327 19:20:09.337157  624194 cri.go:89] found id: ""
	I0327 19:20:09.337165  624194 logs.go:276] 2 containers: [82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0 380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884]
	I0327 19:20:09.337219  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.340718  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.345355  624194 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0327 19:20:09.345472  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0327 19:20:09.384212  624194 cri.go:89] found id: ""
	I0327 19:20:09.385200  624194 logs.go:276] 0 containers: []
	W0327 19:20:09.385211  624194 logs.go:278] No container was found matching "coredns"
	I0327 19:20:09.385219  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0327 19:20:09.385279  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0327 19:20:09.426859  624194 cri.go:89] found id: "574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e"
	I0327 19:20:09.426880  624194 cri.go:89] found id: "84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9"
	I0327 19:20:09.426885  624194 cri.go:89] found id: ""
	I0327 19:20:09.426892  624194 logs.go:276] 2 containers: [574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e 84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9]
	I0327 19:20:09.426948  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.430822  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.434593  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0327 19:20:09.434688  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0327 19:20:09.472423  624194 cri.go:89] found id: "3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199"
	I0327 19:20:09.472446  624194 cri.go:89] found id: ""
	I0327 19:20:09.472455  624194 logs.go:276] 1 containers: [3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199]
	I0327 19:20:09.472551  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.476172  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0327 19:20:09.476270  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0327 19:20:09.514433  624194 cri.go:89] found id: "b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b"
	I0327 19:20:09.514471  624194 cri.go:89] found id: "cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873"
	I0327 19:20:09.514476  624194 cri.go:89] found id: ""
	I0327 19:20:09.514484  624194 logs.go:276] 2 containers: [b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873]
	I0327 19:20:09.514550  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.518385  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:09.521508  624194 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0327 19:20:09.521590  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0327 19:20:09.561871  624194 cri.go:89] found id: "a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd"
	I0327 19:20:09.561971  624194 cri.go:89] found id: ""
	I0327 19:20:09.561994  624194 logs.go:276] 1 containers: [a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd]
	I0327 19:20:09.562085  624194 ssh_runner.go:195] Run: which crictl
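Each lookup above follows the same pattern: "sudo crictl ps -a --quiet --name=<component>" prints one container ID per line (two each for kube-apiserver, etcd, kube-scheduler and kube-controller-manager because the pre-restart containers are still visible to the runtime, and none for coredns, which has not started), with "which crictl" run first to locate the binary. A hedged Go sketch of that enumeration, assuming a host with crictl installed and sudo available; the helper name is illustrative, not minikube's:

    // criContainers sketches the per-component CRI lookup seen in the log:
    // `sudo crictl ps -a --quiet --name=<component>` prints one container ID
    // per line. Names and error handling here are illustrative only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func criContainers(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        // One hex ID per line; an empty slice means no matching container.
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := criContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
        }
    }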
	I0327 19:20:09.565541  624194 logs.go:123] Gathering logs for kube-apiserver [2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a] ...
	I0327 19:20:09.565608  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a"
	I0327 19:20:09.603659  624194 logs.go:123] Gathering logs for kube-proxy [3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199] ...
	I0327 19:20:09.603688  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199"
	I0327 19:20:09.644404  624194 logs.go:123] Gathering logs for kube-controller-manager [cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873] ...
	I0327 19:20:09.644431  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873"
	I0327 19:20:09.685063  624194 logs.go:123] Gathering logs for kubelet ...
	I0327 19:20:09.685108  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 19:20:09.764861  624194 logs.go:123] Gathering logs for describe nodes ...
	I0327 19:20:09.764938  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 19:20:10.459835  624194 logs.go:123] Gathering logs for kube-apiserver [bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11] ...
	I0327 19:20:10.459912  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11"
	I0327 19:20:10.535314  624194 logs.go:123] Gathering logs for CRI-O ...
	I0327 19:20:10.535387  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0327 19:20:10.622485  624194 logs.go:123] Gathering logs for kube-scheduler [84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9] ...
	I0327 19:20:10.622561  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9"
	I0327 19:20:10.668752  624194 logs.go:123] Gathering logs for kube-controller-manager [b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b] ...
	I0327 19:20:10.668780  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b"
	I0327 19:20:10.781990  624194 logs.go:123] Gathering logs for etcd [82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0] ...
	I0327 19:20:10.782023  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0"
	I0327 19:20:10.862379  624194 logs.go:123] Gathering logs for kube-scheduler [574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e] ...
	I0327 19:20:10.862416  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e"
	I0327 19:20:10.938277  624194 logs.go:123] Gathering logs for kindnet [a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd] ...
	I0327 19:20:10.938313  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd"
	I0327 19:20:10.989837  624194 logs.go:123] Gathering logs for container status ...
	I0327 19:20:10.989867  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 19:20:11.081397  624194 logs.go:123] Gathering logs for dmesg ...
	I0327 19:20:11.081427  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 19:20:11.117337  624194 logs.go:123] Gathering logs for etcd [380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884] ...
	I0327 19:20:11.117373  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884"
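With the IDs in hand, each source is dumped with a bounded tail: container logs via "crictl logs --tail 400 <id>", kubelet and CRI-O via "journalctl -u <unit> -n 400", plus dmesg and a "describe nodes" run with the bundled kubectl against the local kubeconfig. The loop then resumes probing /healthz.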
	I0327 19:20:13.703965  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:13.711761  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 19:20:13.711797  624194 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
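Note the progression here: start-service-ip-repair-controllers now reports ok, and the only remaining failure is the rbac/bootstrap-roles post-start hook — the apiserver seeding its default RBAC policy, which is typically the last gate before /healthz goes green.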
	I0327 19:20:13.711822  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0327 19:20:13.711891  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0327 19:20:13.783209  624194 cri.go:89] found id: "bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11"
	I0327 19:20:13.783229  624194 cri.go:89] found id: "2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a"
	I0327 19:20:13.783241  624194 cri.go:89] found id: ""
	I0327 19:20:13.783249  624194 logs.go:276] 2 containers: [bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11 2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a]
	I0327 19:20:13.783304  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:13.791059  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:13.796138  624194 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0327 19:20:13.796216  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0327 19:20:13.858496  624194 cri.go:89] found id: "82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0"
	I0327 19:20:13.858522  624194 cri.go:89] found id: "380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884"
	I0327 19:20:13.858527  624194 cri.go:89] found id: ""
	I0327 19:20:13.858535  624194 logs.go:276] 2 containers: [82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0 380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884]
	I0327 19:20:13.858591  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:13.862902  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:13.868108  624194 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0327 19:20:13.868185  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0327 19:20:13.924257  624194 cri.go:89] found id: ""
	I0327 19:20:13.924283  624194 logs.go:276] 0 containers: []
	W0327 19:20:13.924293  624194 logs.go:278] No container was found matching "coredns"
	I0327 19:20:13.924299  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0327 19:20:13.924357  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0327 19:20:13.977290  624194 cri.go:89] found id: "574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e"
	I0327 19:20:13.977315  624194 cri.go:89] found id: "84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9"
	I0327 19:20:13.977320  624194 cri.go:89] found id: ""
	I0327 19:20:13.977328  624194 logs.go:276] 2 containers: [574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e 84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9]
	I0327 19:20:13.977385  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:13.981857  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:13.985802  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0327 19:20:13.985878  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0327 19:20:14.030806  624194 cri.go:89] found id: "3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199"
	I0327 19:20:14.030831  624194 cri.go:89] found id: ""
	I0327 19:20:14.030839  624194 logs.go:276] 1 containers: [3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199]
	I0327 19:20:14.030896  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:14.035101  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0327 19:20:14.035195  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0327 19:20:14.075244  624194 cri.go:89] found id: "b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b"
	I0327 19:20:14.075268  624194 cri.go:89] found id: "cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873"
	I0327 19:20:14.075274  624194 cri.go:89] found id: ""
	I0327 19:20:14.075282  624194 logs.go:276] 2 containers: [b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873]
	I0327 19:20:14.075364  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:14.079390  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:14.083016  624194 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0327 19:20:14.083089  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0327 19:20:14.120658  624194 cri.go:89] found id: "a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd"
	I0327 19:20:14.120682  624194 cri.go:89] found id: ""
	I0327 19:20:14.120690  624194 logs.go:276] 1 containers: [a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd]
	I0327 19:20:14.120750  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:14.124436  624194 logs.go:123] Gathering logs for kube-apiserver [bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11] ...
	I0327 19:20:14.124467  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11"
	I0327 19:20:14.174886  624194 logs.go:123] Gathering logs for kube-proxy [3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199] ...
	I0327 19:20:14.174923  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199"
	I0327 19:20:14.219432  624194 logs.go:123] Gathering logs for kindnet [a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd] ...
	I0327 19:20:14.219460  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd"
	I0327 19:20:14.260517  624194 logs.go:123] Gathering logs for container status ...
	I0327 19:20:14.260544  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 19:20:14.318741  624194 logs.go:123] Gathering logs for dmesg ...
	I0327 19:20:14.318770  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 19:20:14.340340  624194 logs.go:123] Gathering logs for describe nodes ...
	I0327 19:20:14.340373  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 19:20:14.659319  624194 logs.go:123] Gathering logs for kube-apiserver [2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a] ...
	I0327 19:20:14.659395  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a"
	I0327 19:20:14.699089  624194 logs.go:123] Gathering logs for etcd [380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884] ...
	I0327 19:20:14.699115  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884"
	I0327 19:20:14.768873  624194 logs.go:123] Gathering logs for kube-scheduler [574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e] ...
	I0327 19:20:14.768907  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e"
	I0327 19:20:14.825713  624194 logs.go:123] Gathering logs for kube-scheduler [84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9] ...
	I0327 19:20:14.825745  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9"
	I0327 19:20:14.865557  624194 logs.go:123] Gathering logs for kube-controller-manager [cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873] ...
	I0327 19:20:14.865588  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873"
	I0327 19:20:14.904744  624194 logs.go:123] Gathering logs for CRI-O ...
	I0327 19:20:14.904773  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0327 19:20:14.973669  624194 logs.go:123] Gathering logs for kubelet ...
	I0327 19:20:14.973703  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 19:20:15.064862  624194 logs.go:123] Gathering logs for kube-controller-manager [b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b] ...
	I0327 19:20:15.064899  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b"
	I0327 19:20:15.144120  624194 logs.go:123] Gathering logs for etcd [82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0] ...
	I0327 19:20:15.144161  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0"
	I0327 19:20:17.709332  624194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0327 19:20:17.718755  624194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0327 19:20:17.718831  624194 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0327 19:20:17.718844  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:17.718853  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:17.718863  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:17.733428  624194 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0327 19:20:17.733713  624194 api_server.go:141] control plane version: v1.29.3
	I0327 19:20:17.733739  624194 api_server.go:131] duration metric: took 47.011610906s to wait for apiserver health ...
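Once /healthz finally returns 200 (47s into the wait), minikube confirms the control-plane version with a GET /version, logged above through its round-tripper. A minimal sketch of that call, assuming the standard Kubernetes /version JSON payload; the client setup is illustrative and, as in the first sketch, skips CA verification only for brevity:

    // versionCheck sketches the GET /version call logged above. The JSON
    // field follows the standard Kubernetes /version payload; client setup
    // is an assumption, not minikube's actual code.
    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v struct {
            GitVersion string `json:"gitVersion"` // e.g. "v1.29.3", as logged above
        }
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }

With the version confirmed, the wait moves on to watching for kube-system pods, repeating the same CRI enumeration and log-gathering sweep below.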
	I0327 19:20:17.733749  624194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 19:20:17.733770  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0327 19:20:17.733855  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0327 19:20:17.778784  624194 cri.go:89] found id: "bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11"
	I0327 19:20:17.778857  624194 cri.go:89] found id: "2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a"
	I0327 19:20:17.778868  624194 cri.go:89] found id: ""
	I0327 19:20:17.778877  624194 logs.go:276] 2 containers: [bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11 2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a]
	I0327 19:20:17.778936  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:17.782915  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:17.786699  624194 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0327 19:20:17.786772  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0327 19:20:17.826092  624194 cri.go:89] found id: "82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0"
	I0327 19:20:17.826115  624194 cri.go:89] found id: "380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884"
	I0327 19:20:17.826120  624194 cri.go:89] found id: ""
	I0327 19:20:17.826128  624194 logs.go:276] 2 containers: [82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0 380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884]
	I0327 19:20:17.826194  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:17.829857  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:17.833264  624194 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0327 19:20:17.833340  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0327 19:20:17.870628  624194 cri.go:89] found id: ""
	I0327 19:20:17.870653  624194 logs.go:276] 0 containers: []
	W0327 19:20:17.870662  624194 logs.go:278] No container was found matching "coredns"
	I0327 19:20:17.870668  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0327 19:20:17.870743  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0327 19:20:17.911800  624194 cri.go:89] found id: "574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e"
	I0327 19:20:17.911824  624194 cri.go:89] found id: "84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9"
	I0327 19:20:17.911829  624194 cri.go:89] found id: ""
	I0327 19:20:17.911836  624194 logs.go:276] 2 containers: [574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e 84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9]
	I0327 19:20:17.911894  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:17.916019  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:17.919810  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0327 19:20:17.919892  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0327 19:20:17.960417  624194 cri.go:89] found id: "3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199"
	I0327 19:20:17.960441  624194 cri.go:89] found id: ""
	I0327 19:20:17.960453  624194 logs.go:276] 1 containers: [3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199]
	I0327 19:20:17.960539  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:17.964532  624194 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0327 19:20:17.964607  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0327 19:20:18.020088  624194 cri.go:89] found id: "b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b"
	I0327 19:20:18.020115  624194 cri.go:89] found id: "cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873"
	I0327 19:20:18.020121  624194 cri.go:89] found id: ""
	I0327 19:20:18.020137  624194 logs.go:276] 2 containers: [b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873]
	I0327 19:20:18.020217  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:18.024349  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:18.028446  624194 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0327 19:20:18.028543  624194 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0327 19:20:18.073497  624194 cri.go:89] found id: "a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd"
	I0327 19:20:18.073518  624194 cri.go:89] found id: ""
	I0327 19:20:18.073527  624194 logs.go:276] 1 containers: [a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd]
	I0327 19:20:18.073585  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:18.077768  624194 logs.go:123] Gathering logs for kubelet ...
	I0327 19:20:18.077795  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 19:20:18.154577  624194 logs.go:123] Gathering logs for etcd [380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884] ...
	I0327 19:20:18.154616  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380b4d5be923353cff4d9750074c5261b94dbc9a5437f0df351c602d85cc3884"
	I0327 19:20:18.231889  624194 logs.go:123] Gathering logs for kindnet [a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd] ...
	I0327 19:20:18.231932  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1ab61cd02a31b1d66671b1bf54236c68e3833cc817491464968e10a321b60dd"
	I0327 19:20:18.271711  624194 logs.go:123] Gathering logs for CRI-O ...
	I0327 19:20:18.271741  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0327 19:20:18.341610  624194 logs.go:123] Gathering logs for dmesg ...
	I0327 19:20:18.341647  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 19:20:18.362718  624194 logs.go:123] Gathering logs for describe nodes ...
	I0327 19:20:18.362747  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 19:20:18.632926  624194 logs.go:123] Gathering logs for kube-apiserver [bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11] ...
	I0327 19:20:18.632969  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5840171099298e4c0b77c0501a1eef6567ec7bbfc62904a86686352caf6b11"
	I0327 19:20:18.692996  624194 logs.go:123] Gathering logs for kube-apiserver [2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a] ...
	I0327 19:20:18.693034  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2932945fc6e72cf714595f8fd974a9b497e5c5f8fd3b552fa7fb75e62b1f2d4a"
	I0327 19:20:18.734513  624194 logs.go:123] Gathering logs for etcd [82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0] ...
	I0327 19:20:18.734543  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82750d28d9d8ada01f0f37077bcb30dd8433dd6e5ded339fd152df9074466ee0"
	I0327 19:20:18.789928  624194 logs.go:123] Gathering logs for kube-scheduler [84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9] ...
	I0327 19:20:18.790010  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84d49a67ef1e2419f5830ee1add2f4adfd9eac7cf600add2577dbdc237b459b9"
	I0327 19:20:18.828465  624194 logs.go:123] Gathering logs for kube-proxy [3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199] ...
	I0327 19:20:18.828543  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fab233198555c1f02edca0fdc38f6dfe6b3567208b86d73760642838f0bb199"
	I0327 19:20:18.870594  624194 logs.go:123] Gathering logs for kube-controller-manager [cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873] ...
	I0327 19:20:18.870628  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd140d1478d9ccdc23dee2b9b8c295f878e464d6aa44e1650734b8f3ea9b3873"
	I0327 19:20:18.910549  624194 logs.go:123] Gathering logs for container status ...
	I0327 19:20:18.910580  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 19:20:18.970931  624194 logs.go:123] Gathering logs for kube-scheduler [574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e] ...
	I0327 19:20:18.970962  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 574263b306aedb2bf16736cb6320603def79c2d366780ad7a6a0af9689cb059e"
	I0327 19:20:19.036701  624194 logs.go:123] Gathering logs for kube-controller-manager [b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b] ...
	I0327 19:20:19.036733  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0ae7be377f5c5dcad1ff7d6e445ad95929eb6075669cc7961cb3767c102d68b"
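The sweep above applies the same two-step pattern to every component: a name-filtered listing (sudo crictl ps -a --quiet --name=<component>), then a bounded tail of each hit (sudo crictl logs --tail 400 <id>). A minimal standalone sketch of that loop, assuming crictl is on PATH and the CRI endpoint is configured:

	#!/usr/bin/env bash
	# Sweep the last 400 log lines of every control-plane component,
	# mirroring the cri.go/logs.go calls above.
	set -euo pipefail
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    for id in $(sudo crictl ps -a --quiet --name="$name"); do
	        echo "=== ${name} [${id}] ==="
	        sudo crictl logs --tail 400 "$id"
	    done
	done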
	I0327 19:20:21.606009  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0327 19:20:21.606036  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:21.606045  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:21.606051  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:21.613739  624194 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0327 19:20:21.621198  624194 system_pods.go:59] 19 kube-system pods found
	I0327 19:20:21.621252  624194 system_pods.go:61] "coredns-76f75df574-cq2vx" [c0b212c8-0d5d-4e1c-923b-3b4fe61eda74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0327 19:20:21.621262  624194 system_pods.go:61] "coredns-76f75df574-knk2g" [3386695d-c8c7-4e4a-bf1c-b44e6e38a603] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0327 19:20:21.621270  624194 system_pods.go:61] "etcd-ha-738145" [23447c19-8537-40e0-98dc-ef3a378caf03] Running
	I0327 19:20:21.621275  624194 system_pods.go:61] "etcd-ha-738145-m02" [ff44e88e-6f62-49ce-8839-476f548c6f78] Running
	I0327 19:20:21.621280  624194 system_pods.go:61] "kindnet-66mtx" [ad2a4803-f180-46e5-9308-c809615cdf30] Running
	I0327 19:20:21.621285  624194 system_pods.go:61] "kindnet-n7v2f" [ae6aba2a-5a54-4616-95d7-dc69f582cb0a] Running
	I0327 19:20:21.621289  624194 system_pods.go:61] "kindnet-wnwtz" [78816488-facc-4b70-8898-6d3999956227] Running
	I0327 19:20:21.621295  624194 system_pods.go:61] "kube-apiserver-ha-738145" [c8ed10de-85e5-4be7-8bc3-f84afd0f21dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0327 19:20:21.621306  624194 system_pods.go:61] "kube-apiserver-ha-738145-m02" [21e047eb-feab-44ed-8cdd-37104601c554] Running
	I0327 19:20:21.621313  624194 system_pods.go:61] "kube-controller-manager-ha-738145" [1ab86d45-c528-431f-87a1-710ff3c76789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0327 19:20:21.621324  624194 system_pods.go:61] "kube-controller-manager-ha-738145-m02" [65243816-3d23-449b-bb19-820a21890766] Running
	I0327 19:20:21.621328  624194 system_pods.go:61] "kube-proxy-b8vjf" [3ede54ed-3436-4059-a887-d4fe9a27e5d1] Running
	I0327 19:20:21.621332  624194 system_pods.go:61] "kube-proxy-fh7bn" [fab842d4-7040-4e53-a907-4bb0363a7d41] Running
	I0327 19:20:21.621342  624194 system_pods.go:61] "kube-proxy-vgfbw" [0c62459e-0d8f-4409-b0da-622bdd5c31ee] Running
	I0327 19:20:21.621346  624194 system_pods.go:61] "kube-scheduler-ha-738145" [7b3d078b-808d-43e6-9fc7-1775369e771c] Running
	I0327 19:20:21.621350  624194 system_pods.go:61] "kube-scheduler-ha-738145-m02" [b4717660-2fd9-4e17-b58f-0ed41a769f85] Running
	I0327 19:20:21.621354  624194 system_pods.go:61] "kube-vip-ha-738145" [7f373cbe-1b1c-4079-9efa-84ff718853d0] Running
	I0327 19:20:21.621358  624194 system_pods.go:61] "kube-vip-ha-738145-m02" [ec937e5f-bb84-456f-9af0-083bb07b03b4] Running
	I0327 19:20:21.621367  624194 system_pods.go:61] "storage-provisioner" [da4a446f-dfc0-4993-a56a-c30ef4d63446] Running
	I0327 19:20:21.621373  624194 system_pods.go:74] duration metric: took 3.887618642s to wait for pod list to return data ...
	I0327 19:20:21.621381  624194 default_sa.go:34] waiting for default service account to be created ...
	I0327 19:20:21.621461  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0327 19:20:21.621473  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:21.621482  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:21.621488  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:21.627708  624194 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 19:20:21.628072  624194 default_sa.go:45] found service account: "default"
	I0327 19:20:21.628096  624194 default_sa.go:55] duration metric: took 6.706376ms for default service account to be created ...
	I0327 19:20:21.628110  624194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 19:20:21.628183  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0327 19:20:21.628203  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:21.628212  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:21.628215  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:21.634289  624194 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 19:20:21.643126  624194 system_pods.go:86] 19 kube-system pods found
	I0327 19:20:21.643172  624194 system_pods.go:89] "coredns-76f75df574-cq2vx" [c0b212c8-0d5d-4e1c-923b-3b4fe61eda74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0327 19:20:21.643184  624194 system_pods.go:89] "coredns-76f75df574-knk2g" [3386695d-c8c7-4e4a-bf1c-b44e6e38a603] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0327 19:20:21.643194  624194 system_pods.go:89] "etcd-ha-738145" [23447c19-8537-40e0-98dc-ef3a378caf03] Running
	I0327 19:20:21.643200  624194 system_pods.go:89] "etcd-ha-738145-m02" [ff44e88e-6f62-49ce-8839-476f548c6f78] Running
	I0327 19:20:21.643205  624194 system_pods.go:89] "kindnet-66mtx" [ad2a4803-f180-46e5-9308-c809615cdf30] Running
	I0327 19:20:21.643209  624194 system_pods.go:89] "kindnet-n7v2f" [ae6aba2a-5a54-4616-95d7-dc69f582cb0a] Running
	I0327 19:20:21.643215  624194 system_pods.go:89] "kindnet-wnwtz" [78816488-facc-4b70-8898-6d3999956227] Running
	I0327 19:20:21.643223  624194 system_pods.go:89] "kube-apiserver-ha-738145" [c8ed10de-85e5-4be7-8bc3-f84afd0f21dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0327 19:20:21.643232  624194 system_pods.go:89] "kube-apiserver-ha-738145-m02" [21e047eb-feab-44ed-8cdd-37104601c554] Running
	I0327 19:20:21.643240  624194 system_pods.go:89] "kube-controller-manager-ha-738145" [1ab86d45-c528-431f-87a1-710ff3c76789] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0327 19:20:21.643245  624194 system_pods.go:89] "kube-controller-manager-ha-738145-m02" [65243816-3d23-449b-bb19-820a21890766] Running
	I0327 19:20:21.643250  624194 system_pods.go:89] "kube-proxy-b8vjf" [3ede54ed-3436-4059-a887-d4fe9a27e5d1] Running
	I0327 19:20:21.643259  624194 system_pods.go:89] "kube-proxy-fh7bn" [fab842d4-7040-4e53-a907-4bb0363a7d41] Running
	I0327 19:20:21.643263  624194 system_pods.go:89] "kube-proxy-vgfbw" [0c62459e-0d8f-4409-b0da-622bdd5c31ee] Running
	I0327 19:20:21.643267  624194 system_pods.go:89] "kube-scheduler-ha-738145" [7b3d078b-808d-43e6-9fc7-1775369e771c] Running
	I0327 19:20:21.643274  624194 system_pods.go:89] "kube-scheduler-ha-738145-m02" [b4717660-2fd9-4e17-b58f-0ed41a769f85] Running
	I0327 19:20:21.643278  624194 system_pods.go:89] "kube-vip-ha-738145" [7f373cbe-1b1c-4079-9efa-84ff718853d0] Running
	I0327 19:20:21.643282  624194 system_pods.go:89] "kube-vip-ha-738145-m02" [ec937e5f-bb84-456f-9af0-083bb07b03b4] Running
	I0327 19:20:21.643286  624194 system_pods.go:89] "storage-provisioner" [da4a446f-dfc0-4993-a56a-c30ef4d63446] Running
	I0327 19:20:21.643293  624194 system_pods.go:126] duration metric: took 15.172653ms to wait for k8s-apps to be running ...
	I0327 19:20:21.643309  624194 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 19:20:21.643370  624194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:20:21.658352  624194 system_svc.go:56] duration metric: took 15.023632ms WaitForService to wait for kubelet
	I0327 19:20:21.658426  624194 kubeadm.go:576] duration metric: took 1m12.835424489s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 19:20:21.658464  624194 node_conditions.go:102] verifying NodePressure condition ...
	I0327 19:20:21.658576  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0327 19:20:21.658605  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:21.658627  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:21.658646  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:21.662364  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:21.664263  624194 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 19:20:21.664339  624194 node_conditions.go:123] node cpu capacity is 2
	I0327 19:20:21.664366  624194 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 19:20:21.664389  624194 node_conditions.go:123] node cpu capacity is 2
	I0327 19:20:21.664424  624194 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 19:20:21.664449  624194 node_conditions.go:123] node cpu capacity is 2
	I0327 19:20:21.664471  624194 node_conditions.go:105] duration metric: took 5.97088ms to run NodePressure ...
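The NodePressure pass above only reads capacity fields from GET /api/v1/nodes (2 CPUs and 203034800Ki of ephemeral storage per node in this run). A rough kubectl equivalent, assuming a working kubeconfig:

	# Read the same per-node capacity values the verifier logs above.
	kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage'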
	I0327 19:20:21.664512  624194 start.go:240] waiting for startup goroutines ...
	I0327 19:20:21.664555  624194 start.go:254] writing updated cluster config ...
	I0327 19:20:21.667672  624194 out.go:177] 
	I0327 19:20:21.670632  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:20:21.670750  624194 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/config.json ...
	I0327 19:20:21.673845  624194 out.go:177] * Starting "ha-738145-m04" worker node in "ha-738145" cluster
	I0327 19:20:21.677199  624194 cache.go:121] Beginning downloading kic base image for docker with crio
	I0327 19:20:21.680319  624194 out.go:177] * Pulling base image v0.0.43-beta.0 ...
	I0327 19:20:21.682790  624194 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 19:20:21.682824  624194 cache.go:56] Caching tarball of preloaded images
	I0327 19:20:21.682875  624194 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 19:20:21.682923  624194 preload.go:173] Found /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0327 19:20:21.682933  624194 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0327 19:20:21.683054  624194 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/config.json ...
	I0327 19:20:21.697191  624194 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon, skipping pull
	I0327 19:20:21.697272  624194 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in daemon, skipping load
	I0327 19:20:21.697298  624194 cache.go:194] Successfully downloaded all kic artifacts
	I0327 19:20:21.697336  624194 start.go:360] acquireMachinesLock for ha-738145-m04: {Name:mk3d192434f167d4dd22cbec720b21f4cbf265ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 19:20:21.697409  624194 start.go:364] duration metric: took 55.171µs to acquireMachinesLock for "ha-738145-m04"
	I0327 19:20:21.697430  624194 start.go:96] Skipping create...Using existing machine configuration
	I0327 19:20:21.697435  624194 fix.go:54] fixHost starting: m04
	I0327 19:20:21.697702  624194 cli_runner.go:164] Run: docker container inspect ha-738145-m04 --format={{.State.Status}}
	I0327 19:20:21.721180  624194 fix.go:112] recreateIfNeeded on ha-738145-m04: state=Stopped err=<nil>
	W0327 19:20:21.721211  624194 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 19:20:21.724239  624194 out.go:177] * Restarting existing docker container for "ha-738145-m04" ...
	I0327 19:20:21.727128  624194 cli_runner.go:164] Run: docker start ha-738145-m04
	I0327 19:20:22.007059  624194 cli_runner.go:164] Run: docker container inspect ha-738145-m04 --format={{.State.Status}}
	I0327 19:20:22.033566  624194 kic.go:430] container "ha-738145-m04" state is running.
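The restart-and-verify step here is plain docker CLI; done by hand, with the container name taken from this run:

	# Restart the stopped worker and confirm its state and IP, mirroring
	# the cli_runner invocations in this run.
	docker start ha-738145-m04
	docker container inspect ha-738145-m04 --format '{{.State.Status}}'   # expect: running
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-738145-m04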
	I0327 19:20:22.034086  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145-m04
	I0327 19:20:22.059573  624194 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/config.json ...
	I0327 19:20:22.059999  624194 machine.go:94] provisionDockerMachine start ...
	I0327 19:20:22.060083  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:22.080396  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:20:22.080707  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33588 <nil> <nil>}
	I0327 19:20:22.080719  624194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 19:20:22.081481  624194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0327 19:20:25.210658  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-738145-m04
	
	I0327 19:20:25.210681  624194 ubuntu.go:169] provisioning hostname "ha-738145-m04"
	I0327 19:20:25.210789  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:25.246811  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:20:25.247050  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33588 <nil> <nil>}
	I0327 19:20:25.247069  624194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-738145-m04 && echo "ha-738145-m04" | sudo tee /etc/hostname
	I0327 19:20:25.391478  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-738145-m04
	
	I0327 19:20:25.391562  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:25.409898  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:20:25.410398  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33588 <nil> <nil>}
	I0327 19:20:25.410420  624194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-738145-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-738145-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-738145-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 19:20:25.546174  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
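The script just executed is minikube's idempotent /etc/hosts fixup: nothing changes if the hostname is already registered, an existing 127.0.1.1 entry is rewritten in place, and a fresh entry is appended only as a last resort. The same logic as a standalone sketch (the hostname is this run's; substitute your own):

	#!/usr/bin/env bash
	# Idempotent hostname registration in /etc/hosts.
	NEW_NAME="ha-738145-m04"
	sudo hostname "$NEW_NAME"
	if ! grep -q "[[:space:]]${NEW_NAME}\$" /etc/hosts; then
	    if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_NAME}/" /etc/hosts
	    else
	        echo "127.0.1.1 ${NEW_NAME}" | sudo tee -a /etc/hosts
	    fi
	fi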
	I0327 19:20:25.546206  624194 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18517-562206/.minikube CaCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18517-562206/.minikube}
	I0327 19:20:25.546224  624194 ubuntu.go:177] setting up certificates
	I0327 19:20:25.546233  624194 provision.go:84] configureAuth start
	I0327 19:20:25.546294  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145-m04
	I0327 19:20:25.581342  624194 provision.go:143] copyHostCerts
	I0327 19:20:25.581389  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem
	I0327 19:20:25.581441  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem, removing ...
	I0327 19:20:25.581454  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem
	I0327 19:20:25.581539  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/ca.pem (1082 bytes)
	I0327 19:20:25.581627  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem
	I0327 19:20:25.581649  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem, removing ...
	I0327 19:20:25.581654  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem
	I0327 19:20:25.581686  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/cert.pem (1123 bytes)
	I0327 19:20:25.581771  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem
	I0327 19:20:25.581793  624194 exec_runner.go:144] found /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem, removing ...
	I0327 19:20:25.581798  624194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem
	I0327 19:20:25.581823  624194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18517-562206/.minikube/key.pem (1679 bytes)
	I0327 19:20:25.581875  624194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem org=jenkins.ha-738145-m04 san=[127.0.0.1 192.168.49.5 ha-738145-m04 localhost minikube]
	I0327 19:20:26.064298  624194 provision.go:177] copyRemoteCerts
	I0327 19:20:26.064373  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 19:20:26.064412  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:26.079991  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33588 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m04/id_rsa Username:docker}
	I0327 19:20:26.176712  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0327 19:20:26.176784  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0327 19:20:26.205825  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0327 19:20:26.205887  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 19:20:26.231989  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0327 19:20:26.232060  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 19:20:26.258310  624194 provision.go:87] duration metric: took 712.062853ms to configureAuth
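configureAuth above re-issues a server certificate whose SANs cover every name this node answers to (127.0.0.1, 192.168.49.5, ha-738145-m04, localhost, minikube) before staging it to /etc/docker. minikube does this in Go; an openssl sketch of an equivalent issuance, with illustrative file names and validity period:

	# Issue a server cert against the CA with the SANs from the log.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	    -subj "/O=jenkins.ha-738145-m04" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	    -CAcreateserial -days 365 -out server.pem \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.5,DNS:ha-738145-m04,DNS:localhost,DNS:minikube')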
	I0327 19:20:26.258391  624194 ubuntu.go:193] setting minikube options for container-runtime
	I0327 19:20:26.258627  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:20:26.258751  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:26.274082  624194 main.go:141] libmachine: Using SSH client type: native
	I0327 19:20:26.274337  624194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33588 <nil> <nil>}
	I0327 19:20:26.274360  624194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0327 19:20:26.550941  624194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0327 19:20:26.550962  624194 machine.go:97] duration metric: took 4.490949536s to provisionDockerMachine
	I0327 19:20:26.550974  624194 start.go:293] postStartSetup for "ha-738145-m04" (driver="docker")
	I0327 19:20:26.550986  624194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 19:20:26.551052  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 19:20:26.551100  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:26.570019  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33588 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m04/id_rsa Username:docker}
	I0327 19:20:26.663172  624194 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 19:20:26.666322  624194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0327 19:20:26.666357  624194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0327 19:20:26.666368  624194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0327 19:20:26.666375  624194 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0327 19:20:26.666386  624194 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-562206/.minikube/addons for local assets ...
	I0327 19:20:26.666448  624194 filesync.go:126] Scanning /home/jenkins/minikube-integration/18517-562206/.minikube/files for local assets ...
	I0327 19:20:26.666529  624194 filesync.go:149] local asset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> 5676232.pem in /etc/ssl/certs
	I0327 19:20:26.666540  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> /etc/ssl/certs/5676232.pem
	I0327 19:20:26.666637  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 19:20:26.675452  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem --> /etc/ssl/certs/5676232.pem (1708 bytes)
	I0327 19:20:26.701188  624194 start.go:296] duration metric: took 150.198954ms for postStartSetup
	I0327 19:20:26.701316  624194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 19:20:26.701389  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:26.717283  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33588 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m04/id_rsa Username:docker}
	I0327 19:20:26.803034  624194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0327 19:20:26.807893  624194 fix.go:56] duration metric: took 5.11045082s for fixHost
	I0327 19:20:26.807917  624194 start.go:83] releasing machines lock for "ha-738145-m04", held for 5.110497277s
	I0327 19:20:26.807986  624194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145-m04
	I0327 19:20:26.832695  624194 out.go:177] * Found network options:
	I0327 19:20:26.834637  624194 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0327 19:20:26.836692  624194 proxy.go:119] fail to check proxy env: Error ip not in block
	W0327 19:20:26.836716  624194 proxy.go:119] fail to check proxy env: Error ip not in block
	W0327 19:20:26.836738  624194 proxy.go:119] fail to check proxy env: Error ip not in block
	W0327 19:20:26.836755  624194 proxy.go:119] fail to check proxy env: Error ip not in block
	I0327 19:20:26.836824  624194 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0327 19:20:26.836869  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:26.837134  624194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 19:20:26.837186  624194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:20:26.857406  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33588 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m04/id_rsa Username:docker}
	I0327 19:20:26.865865  624194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33588 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m04/id_rsa Username:docker}
	I0327 19:20:27.101558  624194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 19:20:27.106403  624194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 19:20:27.115639  624194 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0327 19:20:27.115724  624194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 19:20:27.125425  624194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
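Note the CNI cleanup above deletes nothing: any default loopback/bridge/podman config is renamed with a .mk_disabled suffix so CRI-O stops loading it while it stays recoverable. The rename in isolation:

	# Park default CNI configs under .mk_disabled instead of deleting.
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	    ! -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;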
	I0327 19:20:27.125448  624194 start.go:494] detecting cgroup driver to use...
	I0327 19:20:27.125483  624194 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0327 19:20:27.125534  624194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 19:20:27.146899  624194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 19:20:27.159191  624194 docker.go:217] disabling cri-docker service (if available) ...
	I0327 19:20:27.159257  624194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0327 19:20:27.174497  624194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0327 19:20:27.186920  624194 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0327 19:20:27.288962  624194 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0327 19:20:27.386902  624194 docker.go:233] disabling docker service ...
	I0327 19:20:27.387022  624194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0327 19:20:27.402559  624194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0327 19:20:27.416809  624194 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0327 19:20:27.519795  624194 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0327 19:20:27.621101  624194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
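Only one runtime may own the node, so the sequence above stops, disables, and masks the competitors before CRI-O is configured. Condensed, with the unit names exactly as probed:

	# Make CRI-O the sole runtime on the node.
	for unit in containerd cri-docker.socket cri-docker.service docker.socket docker.service; do
	    sudo systemctl stop -f "$unit" 2>/dev/null || true
	done
	sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker || echo "docker is down"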
	I0327 19:20:27.638953  624194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 19:20:27.656403  624194 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0327 19:20:27.656476  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:20:27.670861  624194 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0327 19:20:27.670933  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:20:27.683087  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:20:27.698403  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:20:27.718963  624194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 19:20:27.728537  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:20:27.744587  624194 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:20:27.754913  624194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0327 19:20:27.766945  624194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 19:20:27.778176  624194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 19:20:27.789444  624194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:20:27.880612  624194 ssh_runner.go:195] Run: sudo systemctl restart crio
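Everything from the crictl.yaml write to the restart is a short series of file edits; consolidated into one block, with paths and values copied from the log lines above:

	# Point crictl at CRI-O, pin the pause image, force the cgroupfs
	# driver, enable IP forwarding, then restart CRI-O.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio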
	I0327 19:20:28.014255  624194 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0327 19:20:28.014382  624194 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0327 19:20:28.018755  624194 start.go:562] Will wait 60s for crictl version
	I0327 19:20:28.018847  624194 ssh_runner.go:195] Run: which crictl
	I0327 19:20:28.024392  624194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 19:20:28.072399  624194 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0327 19:20:28.072497  624194 ssh_runner.go:195] Run: crio --version
	I0327 19:20:28.122036  624194 ssh_runner.go:195] Run: crio --version
	I0327 19:20:28.164365  624194 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.24.6 ...
	I0327 19:20:28.166861  624194 out.go:177]   - env NO_PROXY=192.168.49.2
	I0327 19:20:28.169078  624194 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0327 19:20:28.171762  624194 cli_runner.go:164] Run: docker network inspect ha-738145 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0327 19:20:28.188946  624194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0327 19:20:28.193564  624194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 19:20:28.205767  624194 mustload.go:65] Loading cluster: ha-738145
	I0327 19:20:28.206087  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:20:28.206340  624194 cli_runner.go:164] Run: docker container inspect ha-738145 --format={{.State.Status}}
	I0327 19:20:28.221591  624194 host.go:66] Checking if "ha-738145" exists ...
	I0327 19:20:28.221877  624194 certs.go:68] Setting up /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145 for IP: 192.168.49.5
	I0327 19:20:28.221887  624194 certs.go:194] generating shared ca certs ...
	I0327 19:20:28.222008  624194 certs.go:226] acquiring lock for ca certs: {Name:mk95afc777a0fafcf19d589f4cbc5a374d1fe472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 19:20:28.222142  624194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key
	I0327 19:20:28.222182  624194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key
	I0327 19:20:28.222194  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 19:20:28.222207  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0327 19:20:28.222218  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 19:20:28.222229  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 19:20:28.222282  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem (1338 bytes)
	W0327 19:20:28.222311  624194 certs.go:480] ignoring /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623_empty.pem, impossibly tiny 0 bytes
	I0327 19:20:28.222321  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 19:20:28.222345  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/ca.pem (1082 bytes)
	I0327 19:20:28.222370  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/cert.pem (1123 bytes)
	I0327 19:20:28.222394  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/key.pem (1679 bytes)
	I0327 19:20:28.222435  624194 certs.go:484] found cert: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem (1708 bytes)
	I0327 19:20:28.222470  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:20:28.222493  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem -> /usr/share/ca-certificates/567623.pem
	I0327 19:20:28.222504  624194 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem -> /usr/share/ca-certificates/5676232.pem
	I0327 19:20:28.222531  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 19:20:28.251658  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 19:20:28.277944  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 19:20:28.306051  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 19:20:28.332502  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 19:20:28.360172  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/certs/567623.pem --> /usr/share/ca-certificates/567623.pem (1338 bytes)
	I0327 19:20:28.385057  624194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/ssl/certs/5676232.pem --> /usr/share/ca-certificates/5676232.pem (1708 bytes)
	I0327 19:20:28.414416  624194 ssh_runner.go:195] Run: openssl version
	I0327 19:20:28.420223  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 19:20:28.430228  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:20:28.433955  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:20:28.434021  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 19:20:28.440860  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 19:20:28.450062  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/567623.pem && ln -fs /usr/share/ca-certificates/567623.pem /etc/ssl/certs/567623.pem"
	I0327 19:20:28.459797  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/567623.pem
	I0327 19:20:28.463284  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 19:06 /usr/share/ca-certificates/567623.pem
	I0327 19:20:28.463391  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/567623.pem
	I0327 19:20:28.470071  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/567623.pem /etc/ssl/certs/51391683.0"
	I0327 19:20:28.479552  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5676232.pem && ln -fs /usr/share/ca-certificates/5676232.pem /etc/ssl/certs/5676232.pem"
	I0327 19:20:28.489249  624194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5676232.pem
	I0327 19:20:28.492741  624194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 19:06 /usr/share/ca-certificates/5676232.pem
	I0327 19:20:28.492830  624194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5676232.pem
	I0327 19:20:28.501053  624194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5676232.pem /etc/ssl/certs/3ec20f2e.0"
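The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how TLS libraries locate CAs in /etc/ssl/certs. Reproducing one link by hand:

	# Derive the subject hash and create the lookup symlink, as done
	# for minikubeCA.pem above.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"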
	I0327 19:20:28.510677  624194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 19:20:28.514647  624194 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0327 19:20:28.514693  624194 kubeadm.go:928] updating node {m04 192.168.49.5 0 v1.29.3  false true} ...
	I0327 19:20:28.514857  624194 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-738145-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-738145 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 19:20:28.514931  624194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 19:20:28.526370  624194 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 19:20:28.526441  624194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0327 19:20:28.535233  624194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0327 19:20:28.556805  624194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
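The two "scp memory" writes above install the kubelet unit and its kubeadm drop-in. Recreating the drop-in by hand with the ExecStart printed at kubeadm.go:940 (how minikube splits content between the two files is not shown verbatim in the log, so treat the layout as an approximation):

	# Write the kubelet drop-in, then reload and start, matching the
	# Run: lines that follow in the log.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-738145-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet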
	I0327 19:20:28.576850  624194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0327 19:20:28.580651  624194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 19:20:28.591971  624194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:20:28.681561  624194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 19:20:28.694357  624194 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0327 19:20:28.699555  624194 out.go:177] * Verifying Kubernetes components...
	I0327 19:20:28.694658  624194 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:20:28.702115  624194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 19:20:28.824110  624194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 19:20:28.839732  624194 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:20:28.840102  624194 kapi.go:59] client config for ha-738145: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/client.crt", KeyFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/profiles/ha-738145/client.key", CAFile:"/home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1700360), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0327 19:20:28.840211  624194 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
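The override warning at kubeadm.go:477 fires because the kubeconfig targets the HA virtual IP (192.168.49.254), which may still be down mid-restart, so the client is repointed at a concrete control plane. A quick manual reachability probe (curl assumed; /healthz is anonymously readable under default RBAC):

	# Try the VIP first, fall back to the direct control-plane endpoint.
	curl -sk --max-time 2 https://192.168.49.254:8443/healthz \
	    || curl -sk --max-time 2 https://192.168.49.2:8443/healthz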
	I0327 19:20:28.840475  624194 node_ready.go:35] waiting up to 6m0s for node "ha-738145-m04" to be "Ready" ...
	I0327 19:20:28.840605  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:28.840630  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:28.840665  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:28.840687  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:28.849078  624194 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0327 19:20:29.341431  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:29.341460  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:29.341470  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:29.341482  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:29.344716  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:29.841461  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:29.841522  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:29.841549  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:29.841569  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:29.846753  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:20:30.340746  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:30.340825  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:30.340850  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:30.340870  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:30.344390  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:30.841484  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:30.841548  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:30.841572  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:30.841593  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:30.844810  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:30.846066  624194 node_ready.go:53] node "ha-738145-m04" has status "Ready":"Unknown"
	I0327 19:20:31.341371  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:31.341396  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:31.341406  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:31.341412  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:31.344393  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:31.841072  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:31.841096  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:31.841106  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:31.841112  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:31.844052  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:32.341619  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:32.341643  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:32.341653  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:32.341659  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:32.344781  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:32.841368  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:32.841389  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:32.841399  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:32.841404  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:32.845300  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:33.341285  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:33.341307  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:33.341317  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:33.341321  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:33.344187  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:33.344988  624194 node_ready.go:53] node "ha-738145-m04" has status "Ready":"Unknown"
	I0327 19:20:33.841150  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:33.841174  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:33.841182  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:33.841185  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:33.844248  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:34.340762  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:34.340786  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:34.340796  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:34.340802  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:34.343862  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:34.841263  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:34.841285  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:34.841295  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:34.841301  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:34.844209  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:35.340661  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:35.340685  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:35.340695  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:35.340700  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:35.344132  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:35.345226  624194 node_ready.go:53] node "ha-738145-m04" has status "Ready":"Unknown"
	I0327 19:20:35.840729  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:35.840755  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:35.840765  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:35.840770  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:35.843581  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:35.844231  624194 node_ready.go:49] node "ha-738145-m04" has status "Ready":"True"
	I0327 19:20:35.844252  624194 node_ready.go:38] duration metric: took 7.003730106s for node "ha-738145-m04" to be "Ready" ...
	I0327 19:20:35.844263  624194 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 19:20:35.844334  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0327 19:20:35.844345  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:35.844353  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:35.844356  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:35.849968  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:20:35.857161  624194 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-cq2vx" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:35.857286  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:35.857299  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:35.857308  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:35.857312  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:35.860170  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:35.861011  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:35.861029  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:35.861039  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:35.861043  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:35.863816  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:36.357441  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:36.357462  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:36.357472  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:36.357477  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:36.360639  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:36.361485  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:36.361500  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:36.361510  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:36.361516  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:36.364383  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:36.857414  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:36.857439  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:36.857450  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:36.857454  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:36.861064  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:36.861722  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:36.861734  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:36.861743  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:36.861749  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:36.864604  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:37.357521  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:37.357545  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:37.357556  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:37.357560  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:37.360640  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:37.361557  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:37.361576  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:37.361585  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:37.361589  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:37.364364  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:37.857684  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:37.857705  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:37.857715  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:37.857725  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:37.860850  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:37.861875  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:37.861896  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:37.861929  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:37.861936  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:37.864529  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:37.865075  624194 pod_ready.go:102] pod "coredns-76f75df574-cq2vx" in "kube-system" namespace has status "Ready":"False"
	I0327 19:20:38.358321  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:38.358342  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:38.358358  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:38.358363  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:38.361793  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:38.362485  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:38.362506  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:38.362515  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:38.362519  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:38.365238  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:38.857611  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:38.857631  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:38.857640  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:38.857645  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:38.860597  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:38.861520  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:38.861540  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:38.861549  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:38.861553  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:38.864227  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:39.357423  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:39.357446  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:39.357456  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:39.357461  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:39.360831  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:39.361941  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:39.361962  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:39.361971  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:39.361976  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:39.364731  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:39.857951  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:39.857973  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:39.857982  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:39.857988  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:39.861054  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:39.861809  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:39.861827  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:39.861837  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:39.861842  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:39.864517  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:39.865116  624194 pod_ready.go:102] pod "coredns-76f75df574-cq2vx" in "kube-system" namespace has status "Ready":"False"
	I0327 19:20:40.357637  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:40.357660  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:40.357670  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:40.357677  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:40.360750  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:40.361586  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:40.361604  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:40.361612  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:40.361616  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:40.364429  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:40.857392  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:40.857416  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:40.857425  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:40.857431  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:40.860588  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:40.861383  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:40.861402  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:40.861411  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:40.861415  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:40.864123  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:41.358150  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:41.358184  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:41.358194  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:41.358199  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:41.362965  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:20:41.363690  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:41.363709  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:41.363719  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:41.363724  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:41.368187  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:20:41.857404  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:41.857432  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:41.857442  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:41.857447  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:41.862816  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:20:41.863865  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:41.863886  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:41.863895  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:41.863899  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:41.869693  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:20:41.870281  624194 pod_ready.go:102] pod "coredns-76f75df574-cq2vx" in "kube-system" namespace has status "Ready":"False"
	I0327 19:20:42.358010  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:42.358030  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:42.358040  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:42.358045  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:42.361396  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:42.362361  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:42.362382  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:42.362392  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:42.362396  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:42.365306  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:42.857402  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:42.857421  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:42.857430  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:42.857433  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:42.862057  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:20:42.862764  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:42.862777  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:42.862786  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:42.862791  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:42.866421  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:43.357381  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:43.357401  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:43.357411  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:43.357414  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:43.360604  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:43.361898  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:43.361948  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:43.361958  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:43.361963  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:43.364620  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:43.857403  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:43.857427  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:43.857437  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:43.857441  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:43.860523  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:43.861200  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:43.861210  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:43.861221  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:43.861225  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:43.863949  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:44.358039  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:44.358063  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:44.358073  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:44.358078  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:44.361138  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:44.362296  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:44.362317  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:44.362327  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:44.362332  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:44.365278  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:44.366738  624194 pod_ready.go:102] pod "coredns-76f75df574-cq2vx" in "kube-system" namespace has status "Ready":"False"
	I0327 19:20:44.857723  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:44.857746  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:44.857757  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:44.857761  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:44.860874  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:44.861686  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:44.861707  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:44.861717  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:44.861721  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:44.864430  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:45.357738  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-cq2vx
	I0327 19:20:45.357804  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.357814  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.357818  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.367423  624194 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 19:20:45.369014  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:45.369043  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.369060  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.369065  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.373281  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:20:45.375407  624194 pod_ready.go:97] node "ha-738145" hosting pod "coredns-76f75df574-cq2vx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:45.375441  624194 pod_ready.go:81] duration metric: took 9.518245003s for pod "coredns-76f75df574-cq2vx" in "kube-system" namespace to be "Ready" ...
	E0327 19:20:45.375453  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145" hosting pod "coredns-76f75df574-cq2vx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:45.375461  624194 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-knk2g" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:45.375538  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-knk2g
	I0327 19:20:45.375549  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.375557  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.375561  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.381089  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:20:45.382294  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:45.382319  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.382329  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.382333  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.391316  624194 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0327 19:20:45.392309  624194 pod_ready.go:97] node "ha-738145" hosting pod "coredns-76f75df574-knk2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:45.392339  624194 pod_ready.go:81] duration metric: took 16.864393ms for pod "coredns-76f75df574-knk2g" in "kube-system" namespace to be "Ready" ...
	E0327 19:20:45.392351  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145" hosting pod "coredns-76f75df574-knk2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:45.392369  624194 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:45.392438  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-738145
	I0327 19:20:45.392450  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.392459  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.392465  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.401879  624194 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 19:20:45.403436  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:45.403462  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.403472  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.403477  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.408544  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:20:45.409481  624194 pod_ready.go:97] node "ha-738145" hosting pod "etcd-ha-738145" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:45.409547  624194 pod_ready.go:81] duration metric: took 17.170089ms for pod "etcd-ha-738145" in "kube-system" namespace to be "Ready" ...
	E0327 19:20:45.409573  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145" hosting pod "etcd-ha-738145" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:45.409594  624194 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:45.409715  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-738145-m02
	I0327 19:20:45.409741  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.409764  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.409785  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.413409  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:45.414479  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:45.414541  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.414565  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.414586  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.419827  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:20:45.420430  624194 pod_ready.go:92] pod "etcd-ha-738145-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 19:20:45.420488  624194 pod_ready.go:81] duration metric: took 10.852118ms for pod "etcd-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:45.420525  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:45.420617  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145
	I0327 19:20:45.420643  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.420664  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.420684  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.425801  624194 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 19:20:45.427424  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:45.427489  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.427513  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.427533  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.437416  624194 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0327 19:20:45.439182  624194 pod_ready.go:97] node "ha-738145" hosting pod "kube-apiserver-ha-738145" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:45.439253  624194 pod_ready.go:81] duration metric: took 18.695456ms for pod "kube-apiserver-ha-738145" in "kube-system" namespace to be "Ready" ...
	E0327 19:20:45.439279  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145" hosting pod "kube-apiserver-ha-738145" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:45.439316  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:45.558650  624194 request.go:629] Waited for 119.244459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145-m02
	I0327 19:20:45.558756  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145-m02
	I0327 19:20:45.558769  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.558792  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.558810  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.561828  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:45.757848  624194 request.go:629] Waited for 195.304446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:45.757941  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:45.757992  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.758002  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.758009  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.761003  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:45.761554  624194 pod_ready.go:92] pod "kube-apiserver-ha-738145-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 19:20:45.761613  624194 pod_ready.go:81] duration metric: took 322.269724ms for pod "kube-apiserver-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:45.761632  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:45.958517  624194 request.go:629] Waited for 196.819276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145
	I0327 19:20:45.958586  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145
	I0327 19:20:45.958596  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:45.958602  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:45.958609  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:45.961674  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:46.157828  624194 request.go:629] Waited for 195.255371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:46.157953  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:46.157968  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:46.157978  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:46.157983  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:46.161474  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:46.162348  624194 pod_ready.go:97] node "ha-738145" hosting pod "kube-controller-manager-ha-738145" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:46.162374  624194 pod_ready.go:81] duration metric: took 400.732332ms for pod "kube-controller-manager-ha-738145" in "kube-system" namespace to be "Ready" ...
	E0327 19:20:46.162385  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145" hosting pod "kube-controller-manager-ha-738145" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:46.162393  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:46.358234  624194 request.go:629] Waited for 195.746773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145-m02
	I0327 19:20:46.358299  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-738145-m02
	I0327 19:20:46.358305  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:46.358314  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:46.358319  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:46.361468  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:46.558622  624194 request.go:629] Waited for 196.285478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:46.558677  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:46.558682  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:46.558691  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:46.558697  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:46.562762  624194 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 19:20:46.563356  624194 pod_ready.go:92] pod "kube-controller-manager-ha-738145-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 19:20:46.563373  624194 pod_ready.go:81] duration metric: took 400.968006ms for pod "kube-controller-manager-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:46.563385  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b8vjf" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:46.758226  624194 request.go:629] Waited for 194.739042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8vjf
	I0327 19:20:46.758288  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8vjf
	I0327 19:20:46.758295  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:46.758346  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:46.758356  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:46.761631  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:46.958465  624194 request.go:629] Waited for 195.911523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:46.958518  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:46.958527  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:46.958542  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:46.958549  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:46.961562  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:46.962392  624194 pod_ready.go:97] node "ha-738145" hosting pod "kube-proxy-b8vjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:46.962418  624194 pod_ready.go:81] duration metric: took 399.02656ms for pod "kube-proxy-b8vjf" in "kube-system" namespace to be "Ready" ...
	E0327 19:20:46.962451  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145" hosting pod "kube-proxy-b8vjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:46.962466  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fh7bn" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:47.157807  624194 request.go:629] Waited for 195.250358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh7bn
	I0327 19:20:47.157873  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh7bn
	I0327 19:20:47.157883  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:47.157892  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:47.157915  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:47.160961  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:47.357995  624194 request.go:629] Waited for 196.282287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:47.358054  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m04
	I0327 19:20:47.358062  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:47.358072  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:47.358078  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:47.361147  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:47.361707  624194 pod_ready.go:92] pod "kube-proxy-fh7bn" in "kube-system" namespace has status "Ready":"True"
	I0327 19:20:47.361727  624194 pod_ready.go:81] duration metric: took 399.252511ms for pod "kube-proxy-fh7bn" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:47.361739  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vgfbw" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:47.558655  624194 request.go:629] Waited for 196.850661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vgfbw
	I0327 19:20:47.558779  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vgfbw
	I0327 19:20:47.558792  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:47.558802  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:47.558807  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:47.562081  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:47.758351  624194 request.go:629] Waited for 195.340301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:47.758432  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:47.758441  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:47.758467  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:47.758479  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:47.761396  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:47.762030  624194 pod_ready.go:92] pod "kube-proxy-vgfbw" in "kube-system" namespace has status "Ready":"True"
	I0327 19:20:47.762054  624194 pod_ready.go:81] duration metric: took 400.307266ms for pod "kube-proxy-vgfbw" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:47.762065  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-738145" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:47.958470  624194 request.go:629] Waited for 196.339418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145
	I0327 19:20:47.958583  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145
	I0327 19:20:47.958593  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:47.958608  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:47.958613  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:47.961683  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:48.158617  624194 request.go:629] Waited for 196.336776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:48.158708  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145
	I0327 19:20:48.158714  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:48.158729  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:48.158733  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:48.161729  624194 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 19:20:48.162409  624194 pod_ready.go:97] node "ha-738145" hosting pod "kube-scheduler-ha-738145" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:48.162434  624194 pod_ready.go:81] duration metric: took 400.361773ms for pod "kube-scheduler-ha-738145" in "kube-system" namespace to be "Ready" ...
	E0327 19:20:48.162445  624194 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-738145" hosting pod "kube-scheduler-ha-738145" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-738145" has status "Ready":"Unknown"
	I0327 19:20:48.162452  624194 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:48.358210  624194 request.go:629] Waited for 195.672936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145-m02
	I0327 19:20:48.358274  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-738145-m02
	I0327 19:20:48.358283  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:48.358291  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:48.358295  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:48.361334  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:48.558568  624194 request.go:629] Waited for 196.33333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:48.558656  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-738145-m02
	I0327 19:20:48.558665  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:48.558673  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:48.558681  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:48.561893  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:48.562464  624194 pod_ready.go:92] pod "kube-scheduler-ha-738145-m02" in "kube-system" namespace has status "Ready":"True"
	I0327 19:20:48.562478  624194 pod_ready.go:81] duration metric: took 400.017759ms for pod "kube-scheduler-ha-738145-m02" in "kube-system" namespace to be "Ready" ...
	I0327 19:20:48.562490  624194 pod_ready.go:38] duration metric: took 12.718210174s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 19:20:48.562504  624194 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 19:20:48.562563  624194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:20:48.575016  624194 system_svc.go:56] duration metric: took 12.502948ms WaitForService to wait for kubelet
	I0327 19:20:48.575044  624194 kubeadm.go:576] duration metric: took 19.880065146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 19:20:48.575065  624194 node_conditions.go:102] verifying NodePressure condition ...
	I0327 19:20:48.758432  624194 request.go:629] Waited for 183.299496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0327 19:20:48.758494  624194 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0327 19:20:48.758501  624194 round_trippers.go:469] Request Headers:
	I0327 19:20:48.758509  624194 round_trippers.go:473]     Accept: application/json, */*
	I0327 19:20:48.758514  624194 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0327 19:20:48.761842  624194 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 19:20:48.763330  624194 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 19:20:48.763357  624194 node_conditions.go:123] node cpu capacity is 2
	I0327 19:20:48.763367  624194 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 19:20:48.763372  624194 node_conditions.go:123] node cpu capacity is 2
	I0327 19:20:48.763377  624194 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0327 19:20:48.763383  624194 node_conditions.go:123] node cpu capacity is 2
	I0327 19:20:48.763387  624194 node_conditions.go:105] duration metric: took 188.317209ms to run NodePressure ...
	I0327 19:20:48.763398  624194 start.go:240] waiting for startup goroutines ...
	I0327 19:20:48.763423  624194 start.go:254] writing updated cluster config ...
	I0327 19:20:48.763735  624194 ssh_runner.go:195] Run: rm -f paused
	I0327 19:20:48.828856  624194 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 19:20:48.842982  624194 out.go:177] * Done! kubectl is now configured to use "ha-738145" cluster and "default" namespace by default
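
The trace above is minikube's readiness loop: client-go GETs the node (and then each system-critical pod) roughly every 500ms and inspects its Ready condition, and the "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side QPS/Burst rate limiter rather than the API server. A minimal sketch of that loop, assuming client-go and a local kubeconfig; the hard-coded node name and the 6-minute timeout mirror the log but are otherwise illustrative:

// readiness-loop sketch (assumption: client-go, kubeconfig at the default path)
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go throttles on the client side (the "Waited ... due to
	// client-side throttling" lines above); raising QPS/Burst loosens it.
	cfg.QPS = 50
	cfg.Burst = 100

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the node every 500ms, as the trace does, until Ready or timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := clientset.CoreV1().Nodes().Get(ctx, "ha-738145-m04", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Prints "Unknown" while the kubelet is unreachable, "True" once Ready,
					// matching the node_ready.go lines in the trace.
					fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}

Raising cfg.QPS and cfg.Burst is the usual way to quiet the throttling messages when a single client fans out this many sequential reads.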
	
	
	==> CRI-O <==
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.617948710Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.617982351Z" level=info msg="Updated default CNI network name to kindnet"
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.617997219Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.621186855Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.621220709Z" level=info msg="Updated default CNI network name to kindnet"
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.752915001Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2e777034-aecc-442f-a76e-0d98d063c2d3 name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.753184399Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2e777034-aecc-442f-a76e-0d98d063c2d3 name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.753782459Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ff198d4d-22c9-4796-ba84-8648ee79d0dd name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.753985297Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ff198d4d-22c9-4796-ba84-8648ee79d0dd name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.754910740Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=db5cfbe7-db63-478f-bed8-5251a1c87fe6 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.755010990Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.769060966Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8ffc372013b88ade0bf1d3bfe0067adbddb80830b8639b7ed46cec1cdd0cc296/merged/etc/passwd: no such file or directory"
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.769116071Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8ffc372013b88ade0bf1d3bfe0067adbddb80830b8639b7ed46cec1cdd0cc296/merged/etc/group: no such file or directory"
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.815833819Z" level=info msg="Created container a890851f371a7ff4c94c39249d1814ced2ba1ccd8103ac2e7757b62fc9d06c79: kube-system/storage-provisioner/storage-provisioner" id=db5cfbe7-db63-478f-bed8-5251a1c87fe6 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.816385463Z" level=info msg="Starting container: a890851f371a7ff4c94c39249d1814ced2ba1ccd8103ac2e7757b62fc9d06c79" id=48fcda7e-cc95-46aa-87a5-7f23415180c1 name=/runtime.v1.RuntimeService/StartContainer
	Mar 27 19:20:15 ha-738145 crio[644]: time="2024-03-27 19:20:15.822713962Z" level=info msg="Started container" PID=1888 containerID=a890851f371a7ff4c94c39249d1814ced2ba1ccd8103ac2e7757b62fc9d06c79 description=kube-system/storage-provisioner/storage-provisioner id=48fcda7e-cc95-46aa-87a5-7f23415180c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a53c3c7bb127bfcd4601efea2f7401ed909ee3caa39b89c3b4c1d015775059d
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.518412691Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.29.3" id=b2613757-a101-4d50-9554-71ae09af1621 name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.518650220Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195,RepoTags:[registry.k8s.io/kube-controller-manager:v1.29.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104 registry.k8s.io/kube-controller-manager@sha256:e89c6fb613c47831235c0758443a7a0b735ff97da7a41f9f820f3db035708c19],Size_:118747956,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=b2613757-a101-4d50-9554-71ae09af1621 name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.519585953Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.29.3" id=6fcb792f-faba-4e45-b312-9b3eafca0048 name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.519770362Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195,RepoTags:[registry.k8s.io/kube-controller-manager:v1.29.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104 registry.k8s.io/kube-controller-manager@sha256:e89c6fb613c47831235c0758443a7a0b735ff97da7a41f9f820f3db035708c19],Size_:118747956,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=6fcb792f-faba-4e45-b312-9b3eafca0048 name=/runtime.v1.ImageService/ImageStatus
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.520898758Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-738145/kube-controller-manager" id=73aaafbe-5544-470a-bf06-a1da6063c285 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.520998097Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.597206515Z" level=info msg="Created container 4fdec43174f5df412cfe73ccf563d51d4b4614ed1d8a1d17b6ffe8c4e4fc743a: kube-system/kube-controller-manager-ha-738145/kube-controller-manager" id=73aaafbe-5544-470a-bf06-a1da6063c285 name=/runtime.v1.RuntimeService/CreateContainer
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.597817646Z" level=info msg="Starting container: 4fdec43174f5df412cfe73ccf563d51d4b4614ed1d8a1d17b6ffe8c4e4fc743a" id=6c5bca79-b25c-4bb9-bcd1-36812d6919c2 name=/runtime.v1.RuntimeService/StartContainer
	Mar 27 19:20:29 ha-738145 crio[644]: time="2024-03-27 19:20:29.606907542Z" level=info msg="Started container" PID=1928 containerID=4fdec43174f5df412cfe73ccf563d51d4b4614ed1d8a1d17b6ffe8c4e4fc743a description=kube-system/kube-controller-manager-ha-738145/kube-controller-manager id=6c5bca79-b25c-4bb9-bcd1-36812d6919c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8780559903f8f73d9b55f253c6b515ae5a6e53cf52c277b495332bcbddae3170
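
Each CRI-O line above is one CRI gRPC call (ImageStatus, CreateContainer, StartContainer) arriving over the local unix socket; the kubelet is the client behind the /runtime.v1.ImageService/ImageStatus entries. A hedged sketch of issuing the same ImageStatus call with k8s.io/cri-api, assuming the default crio.sock path; a real client should also treat a nil resp.Image as "not found":

// CRI ImageStatus sketch (assumptions: k8s.io/cri-api, default CRI-O socket path)
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O serves the CRI over a unix socket; no TLS on a local socket.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/storage-provisioner:v5"},
	})
	if err != nil {
		panic(err)
	}
	// Mirrors the Id/RepoTags fields CRI-O echoes back in the log above.
	fmt.Printf("id=%s tags=%v\n", resp.Image.Id, resp.Image.RepoTags)
}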
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4fdec43174f5d       121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195   21 seconds ago       Running             kube-controller-manager   6                   8780559903f8f       kube-controller-manager-ha-738145
	a890851f371a7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   35 seconds ago       Running             storage-provisioner       5                   6a53c3c7bb127       storage-provisioner
	95485706dc53b       adf781c1312f06f9d22bfc391f48c68e39ed1bfe4166c6ec09faea1a89f23d46   37 seconds ago       Running             kube-vip                  2                   c8dbfd89c7c66       kube-vip-ha-738145
	5dd41ff9d2b5a       2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794   41 seconds ago       Running             kube-apiserver            3                   edc01112d2893       kube-apiserver-ha-738145
	9711a1dd7d4c3       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   a2e6188add742       coredns-76f75df574-cq2vx
	81e01b7f44fc8       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   60e157db54bf4       coredns-76f75df574-knk2g
	b1ea4e338d175       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       4                   6a53c3c7bb127       storage-provisioner
	95e5cdb906ce6       0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775   About a minute ago   Running             kube-proxy                2                   c915deec1e209       kube-proxy-b8vjf
	8802433cdaaac       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   About a minute ago   Running             kindnet-cni               2                   51b17e981f5e1       kindnet-n7v2f
	6d3e2579ae17a       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   ed6ac1fc172b6       busybox-7fdf7869d9-hjdcl
	a4eefcde3ca3f       121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195   About a minute ago   Exited              kube-controller-manager   5                   8780559903f8f       kube-controller-manager-ha-738145
	891449c4ece5d       2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794   About a minute ago   Exited              kube-apiserver            2                   edc01112d2893       kube-apiserver-ha-738145
	ae3d27785f02e       adf781c1312f06f9d22bfc391f48c68e39ed1bfe4166c6ec09faea1a89f23d46   About a minute ago   Exited              kube-vip                  1                   c8dbfd89c7c66       kube-vip-ha-738145
	21cc6ef76e003       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   About a minute ago   Running             etcd                      2                   94d98a3c52de9       etcd-ha-738145
	022cf92e59765       4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb   About a minute ago   Running             kube-scheduler            2                   2ed3e7c18a2f2       kube-scheduler-ha-738145
	
	
	==> coredns [81e01b7f44fc816cf63a19933f523867b31c0590680b23d32b4408210fe7f97e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33553 - 2250 "HINFO IN 6822799668115515664.4300266844085823026. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016735271s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1416426639]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 19:19:45.576) (total time: 30001ms):
	Trace[1416426639]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:20:15.577)
	Trace[1416426639]: [30.001552886s] [30.001552886s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1935393436]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 19:19:45.576) (total time: 30000ms):
	Trace[1935393436]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:20:15.577)
	Trace[1935393436]: [30.000971039s] [30.000971039s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1622933001]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 19:19:45.576) (total time: 30001ms):
	Trace[1622933001]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:20:15.577)
	Trace[1622933001]: [30.001116228s] [30.001116228s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [9711a1dd7d4c30e2485631924be31a8d48089e173e3b4a8038b3e735ab9d67f2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39016 - 33738 "HINFO IN 5624212179698095465.4753853034995564625. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013350985s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[29804523]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 19:19:45.482) (total time: 30004ms):
	Trace[29804523]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:20:15.484)
	Trace[29804523]: [30.00402532s] [30.00402532s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1148290863]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 19:19:45.482) (total time: 30005ms):
	Trace[1148290863]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:20:15.484)
	Trace[1148290863]: [30.005405317s] [30.005405317s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1140397847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 19:19:45.483) (total time: 30005ms):
	Trace[1140397847]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:20:15.485)
	Trace[1140397847]: [30.005584426s] [30.005584426s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-738145
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-738145
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28
	                    minikube.k8s.io/name=ha-738145
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T19_10_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 19:10:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-738145
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 19:20:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 27 Mar 2024 19:19:34 +0000   Wed, 27 Mar 2024 19:20:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 27 Mar 2024 19:19:34 +0000   Wed, 27 Mar 2024 19:20:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 27 Mar 2024 19:19:34 +0000   Wed, 27 Mar 2024 19:20:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 27 Mar 2024 19:19:34 +0000   Wed, 27 Mar 2024 19:20:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-738145
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 60a6932056434295bf7588371ad413e3
	  System UUID:                ea89cc7f-49cf-4de1-85e7-a227bbb1e4dc
	  Boot ID:                    561aadd0-a15d-4e78-9187-a38c38772b44
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-hjdcl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 coredns-76f75df574-cq2vx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m44s
	  kube-system                 coredns-76f75df574-knk2g             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m44s
	  kube-system                 etcd-ha-738145                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m57s
	  kube-system                 kindnet-n7v2f                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m45s
	  kube-system                 kube-apiserver-ha-738145             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-controller-manager-ha-738145    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-proxy-b8vjf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 kube-scheduler-ha-738145             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-vip-ha-738145                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 65s                    kube-proxy       
	  Normal  Starting                 3m58s                  kube-proxy       
	  Normal  Starting                 9m43s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m57s                  kubelet          Node ha-738145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m57s                  kubelet          Node ha-738145 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m57s                  kubelet          Node ha-738145 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m45s                  node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  NodeReady                9m14s                  kubelet          Node ha-738145 status is now: NodeReady
	  Normal  RegisteredNode           9m4s                   node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  RegisteredNode           8m6s                   node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node ha-738145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x8 over 4m58s)  kubelet          Node ha-738145 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node ha-738145 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m58s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  RegisteredNode           3m27s                  node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  Starting                 115s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)    kubelet          Node ha-738145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)    kubelet          Node ha-738145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)    kubelet          Node ha-738145 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s                    node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  RegisteredNode           9s                     node-controller  Node ha-738145 event: Registered Node ha-738145 in Controller
	  Normal  NodeNotReady             6s                     node-controller  Node ha-738145 status is now: NodeNotReady
	
	
	Name:               ha-738145-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-738145-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28
	                    minikube.k8s.io/name=ha-738145
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T19_11_32_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 19:11:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-738145-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 19:20:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 19:19:26 +0000   Wed, 27 Mar 2024 19:11:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 19:19:26 +0000   Wed, 27 Mar 2024 19:11:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 19:19:26 +0000   Wed, 27 Mar 2024 19:11:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 19:19:26 +0000   Wed, 27 Mar 2024 19:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-738145-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 308b586d9c9048489a0eda99158a326c
	  System UUID:                c13483d9-1502-452a-8b60-9d8bc6337f81
	  Boot ID:                    561aadd0-a15d-4e78-9187-a38c38772b44
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7sgbt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 etcd-ha-738145-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m23s
	  kube-system                 kindnet-wnwtz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m24s
	  kube-system                 kube-apiserver-ha-738145-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-ha-738145-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-vgfbw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 kube-scheduler-ha-738145-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-vip-ha-738145-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m46s                  kube-proxy       
	  Normal  Starting                 65s                    kube-proxy       
	  Normal  Starting                 9m18s                  kube-proxy       
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m24s (x8 over 9m24s)  kubelet          Node ha-738145-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node ha-738145-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node ha-738145-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m20s                  node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	  Normal  RegisteredNode           9m4s                   node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	  Normal  RegisteredNode           8m6s                   node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	  Normal  NodeHasSufficientPID     6m11s (x8 over 6m11s)  kubelet          Node ha-738145-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m11s (x8 over 6m11s)  kubelet          Node ha-738145-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x8 over 6m11s)  kubelet          Node ha-738145-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	  Normal  Starting                 4m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m56s (x8 over 4m56s)  kubelet          Node ha-738145-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node ha-738145-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node ha-738145-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	  Normal  RegisteredNode           3m38s                  node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	  Normal  RegisteredNode           3m27s                  node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	  Normal  Starting                 114s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)    kubelet          Node ha-738145-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)    kubelet          Node ha-738145-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)    kubelet          Node ha-738145-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s                    node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	  Normal  RegisteredNode           9s                     node-controller  Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller
	
	
	Name:               ha-738145-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-738145-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=475b39f6a1dc94a0c7060d2eec10d9b995edcd28
	                    minikube.k8s.io/name=ha-738145
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_27T19_13_27_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 19:13:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-738145-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 19:20:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 19:20:35 +0000   Wed, 27 Mar 2024 19:20:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 19:20:35 +0000   Wed, 27 Mar 2024 19:20:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 19:20:35 +0000   Wed, 27 Mar 2024 19:20:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 19:20:35 +0000   Wed, 27 Mar 2024 19:20:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-738145-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bd49619f1894583950afc18f3985ef5
	  System UUID:                882aaa82-2eff-4793-b9ee-987ec8b5bd16
	  Boot ID:                    561aadd0-a15d-4e78-9187-a38c38772b44
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-gf4xb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kindnet-66mtx               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m25s
	  kube-system                 kube-proxy-fh7bn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 7m23s                 kube-proxy       
	  Normal  Starting                 8s                    kube-proxy       
	  Normal  Starting                 2m53s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    7m25s                 kubelet          Node ha-738145-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m25s                 kubelet          Node ha-738145-m04 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m25s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m25s                 kubelet          Node ha-738145-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m24s                 node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	  Normal  RegisteredNode           7m21s                 node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	  Normal  RegisteredNode           7m20s                 node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	  Normal  NodeReady                6m53s                 kubelet          Node ha-738145-m04 status is now: NodeReady
	  Normal  RegisteredNode           5m31s                 node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	  Normal  RegisteredNode           4m12s                 node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	  Normal  RegisteredNode           3m38s                 node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	  Normal  NodeNotReady             3m32s                 node-controller  Node ha-738145-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           3m27s                 node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	  Normal  Starting                 3m15s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m15s)  kubelet          Node ha-738145-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m15s)  kubelet          Node ha-738145-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x8 over 3m15s)  kubelet          Node ha-738145-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s                   node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	  Normal  NodeNotReady             31s                   node-controller  Node ha-738145-m04 status is now: NodeNotReady
	  Normal  Starting                 29s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16s (x8 over 29s)     kubelet          Node ha-738145-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x8 over 29s)     kubelet          Node ha-738145-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s (x8 over 29s)     kubelet          Node ha-738145-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                    node-controller  Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller
	
	
	==> dmesg <==
	[  +0.001058] FS-Cache: O-key=[8] 'd03c5c0100000000'
	[  +0.000705] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001012] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=0000000039bee2d0
	[  +0.001067] FS-Cache: N-key=[8] 'd03c5c0100000000'
	[  +0.002397] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=00000000fdf1f645
	[  +0.001035] FS-Cache: O-key=[8] 'd03c5c0100000000'
	[  +0.000710] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000f697cc87
	[  +0.001044] FS-Cache: N-key=[8] 'd03c5c0100000000'
	[  +2.383770] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=00000029 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=00000000cde025e8
	[  +0.001111] FS-Cache: O-key=[8] 'cf3c5c0100000000'
	[  +0.000734] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=00000000b46e9eef
	[  +0.001060] FS-Cache: N-key=[8] 'cf3c5c0100000000'
	[  +0.375682] FS-Cache: Duplicate cookie detected
	[  +0.000813] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001158] FS-Cache: O-cookie d=00000000b2ab35c3{9p.inode} n=000000000a94a302
	[  +0.001200] FS-Cache: O-key=[8] 'd53c5c0100000000'
	[  +0.000753] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=00000000b2ab35c3{9p.inode} n=0000000039bee2d0
	[  +0.001193] FS-Cache: N-key=[8] 'd53c5c0100000000'
	
	
	==> etcd [21cc6ef76e00304b4f53350c4afc7d7280a941e393e86d8b0d7b84ff36a9a392] <==
	{"level":"warn","ts":"2024-03-27T19:19:25.326013Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.753691Z","time spent":"3.572317739s","remote":"127.0.0.1:49636","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":29,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.753661Z","time spent":"3.572360397s","remote":"127.0.0.1:49600","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":4,"response size":9325,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.32604Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.745505Z","time spent":"3.580531341s","remote":"127.0.0.1:49646","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":29,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.744228Z","time spent":"3.581820803s","remote":"127.0.0.1:49690","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":66,"response size":59468,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326068Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.749194Z","time spent":"3.576869772s","remote":"127.0.0.1:49582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":29,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.32608Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.751291Z","time spent":"3.574784705s","remote":"127.0.0.1:49564","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":29,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326093Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.749083Z","time spent":"3.577006256s","remote":"127.0.0.1:49514","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":29,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.749027Z","time spent":"3.577074949s","remote":"127.0.0.1:49596","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":29,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326122Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.745597Z","time spent":"3.58052056s","remote":"127.0.0.1:49660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":29,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326135Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.745479Z","time spent":"3.580651389s","remote":"127.0.0.1:49640","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":29,"request content":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326147Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.678423Z","time spent":"3.647720402s","remote":"127.0.0.1:49838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":29,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326166Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.744339Z","time spent":"3.581820484s","remote":"127.0.0.1:49676","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":12,"response size":8658,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.639468Z","time spent":"3.686706033s","remote":"127.0.0.1:49716","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":1016,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326192Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.744228Z","time spent":"3.581959707s","remote":"127.0.0.1:49668","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":12,"response size":7116,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.676169Z","time spent":"3.650032176s","remote":"127.0.0.1:49702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":2,"response size":936,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.32622Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.67178Z","time spent":"3.654435451s","remote":"127.0.0.1:49784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":8,"response size":5483,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.326234Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.651277Z","time spent":"3.674952438s","remote":"127.0.0.1:49870","response type":"/etcdserverpb.KV/Range","request count":0,"request size":97,"response count":21,"response size":20214,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:500 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.32625Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.022374Z","time spent":"4.303869359s","remote":"127.0.0.1:49690","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":66,"response size":59468,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" "}
	{"level":"warn","ts":"2024-03-27T19:19:25.328529Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.630693Z","time spent":"3.697818909s","remote":"127.0.0.1:49466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":7,"response size":10767,"request content":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.330863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.790019Z","time spent":"3.540827001s","remote":"127.0.0.1:49538","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":29,"response size":150115,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.32893Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.02235Z","time spent":"4.3065687s","remote":"127.0.0.1:49702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":466,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-03-27T19:19:25.328954Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:21.603755Z","time spent":"3.725193083s","remote":"127.0.0.1:49508","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":29,"request content":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 "}
	{"level":"warn","ts":"2024-03-27T19:19:25.32897Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T19:19:20.958492Z","time spent":"4.370470984s","remote":"127.0.0.1:49610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":712,"request content":"key:\"/registry/leases/kube-system/apiserver-i5jq2c7vvlxkef75img7hburxa\" "}
	{"level":"info","ts":"2024-03-27T19:19:44.966969Z","caller":"traceutil/trace.go:171","msg":"trace[1360284766] transaction","detail":"{read_only:false; response_revision:2697; number_of_response:1; }","duration":"100.487613ms","start":"2024-03-27T19:19:44.866458Z","end":"2024-03-27T19:19:44.966945Z","steps":["trace[1360284766] 'process raft request'  (duration: 29.197178ms)","trace[1360284766] 'store kv pair into bolt db' {req_type:put; key:/registry/pods/kube-system/kube-scheduler-ha-738145; req_size:4262; } (duration: 63.614521ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-27T19:19:45.006164Z","caller":"traceutil/trace.go:171","msg":"trace[2101122783] transaction","detail":"{read_only:false; response_revision:2698; number_of_response:1; }","duration":"123.525365ms","start":"2024-03-27T19:19:44.882572Z","end":"2024-03-27T19:19:45.006097Z","steps":["trace[2101122783] 'process raft request'  (duration: 103.631612ms)","trace[2101122783] 'compare'  (duration: 19.237436ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:20:51 up  3:03,  0 users,  load average: 1.41, 2.48, 2.52
	Linux ha-738145 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8802433cdaaac327334b65dc3cfaf5cd73499700dae34c1589e1d6158484fcae] <==
	I0327 19:20:15.607010       1 main.go:227] handling current node
	I0327 19:20:15.610620       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0327 19:20:15.610648       1 main.go:250] Node ha-738145-m02 has CIDR [10.244.1.0/24] 
	I0327 19:20:15.610782       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0327 19:20:15.610869       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0327 19:20:15.610880       1 main.go:250] Node ha-738145-m04 has CIDR [10.244.3.0/24] 
	I0327 19:20:15.610917       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0327 19:20:25.616321       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:20:25.616688       1 main.go:227] handling current node
	I0327 19:20:25.616733       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0327 19:20:25.616770       1 main.go:250] Node ha-738145-m02 has CIDR [10.244.1.0/24] 
	I0327 19:20:25.616897       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0327 19:20:25.616935       1 main.go:250] Node ha-738145-m04 has CIDR [10.244.3.0/24] 
	I0327 19:20:35.637842       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:20:35.637871       1 main.go:227] handling current node
	I0327 19:20:35.637881       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0327 19:20:35.637887       1 main.go:250] Node ha-738145-m02 has CIDR [10.244.1.0/24] 
	I0327 19:20:35.638014       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0327 19:20:35.638029       1 main.go:250] Node ha-738145-m04 has CIDR [10.244.3.0/24] 
	I0327 19:20:45.652753       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0327 19:20:45.652786       1 main.go:227] handling current node
	I0327 19:20:45.652798       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0327 19:20:45.652804       1 main.go:250] Node ha-738145-m02 has CIDR [10.244.1.0/24] 
	I0327 19:20:45.653000       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0327 19:20:45.653022       1 main.go:250] Node ha-738145-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5dd41ff9d2b5a54d813b9a4ca93d56c10c74384b1193ec64aeec9eb946ea4c0d] <==
	I0327 19:20:12.508231       1 naming_controller.go:291] Starting NamingConditionController
	I0327 19:20:12.508274       1 establishing_controller.go:76] Starting EstablishingController
	I0327 19:20:12.508314       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0327 19:20:12.508353       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0327 19:20:12.508395       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0327 19:20:12.530029       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0327 19:20:12.824212       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0327 19:20:12.827937       1 aggregator.go:165] initial CRD sync complete...
	I0327 19:20:12.829593       1 autoregister_controller.go:141] Starting autoregister controller
	I0327 19:20:12.829641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0327 19:20:12.903337       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 19:20:12.906067       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0327 19:20:12.906878       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0327 19:20:12.913163       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0327 19:20:12.915152       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0327 19:20:12.915172       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0327 19:20:12.933988       1 cache.go:39] Caches are synced for autoregister controller
	I0327 19:20:12.975066       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0327 19:20:12.975232       1 controller.go:116] Starting legacy_token_tracking_controller
	I0327 19:20:12.975294       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0327 19:20:13.076104       1 shared_informer.go:318] Caches are synced for configmaps
	I0327 19:20:13.515247       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0327 19:20:13.909628       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0327 19:20:13.911249       1 controller.go:624] quota admission added evaluator for: endpoints
	I0327 19:20:13.919922       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [891449c4ece5df001ab9dee515abc84aada2977bd9854926b984db464ae2cbb4] <==
	Trace[1196506362]: [3.573541437s] [3.573541437s] END
	I0327 19:19:25.379981       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0327 19:19:25.383917       1 trace.go:236] Trace[987205382]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a5794cde-2255-4714-b47e-346904d0e453,client:::1,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:pods,scope:cluster,url:/api/v1/pods,user-agent:kube-apiserver/v1.29.3 (linux/arm64) kubernetes/6813625,verb:LIST (27-Mar-2024 19:19:22.332) (total time: 3051ms):
	Trace[987205382]: ["List(recursive=true) etcd3" audit-id:a5794cde-2255-4714-b47e-346904d0e453,key:/pods,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 3051ms (19:19:22.332)]
	Trace[987205382]: [3.051323547s] [3.051323547s] END
	I0327 19:19:25.384775       1 trace.go:236] Trace[500508347]: "List(recursive=true) etcd3" audit-id:,key:/pods,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (27-Mar-2024 19:19:21.789) (total time: 3595ms):
	Trace[500508347]: [3.595699884s] [3.595699884s] END
	I0327 19:19:25.420672       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0327 19:19:25.426241       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0327 19:19:25.426319       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0327 19:19:25.426327       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0327 19:19:25.426462       1 shared_informer.go:318] Caches are synced for configmaps
	I0327 19:19:25.426534       1 aggregator.go:165] initial CRD sync complete...
	I0327 19:19:25.426548       1 autoregister_controller.go:141] Starting autoregister controller
	I0327 19:19:25.426554       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0327 19:19:25.426560       1 cache.go:39] Caches are synced for autoregister controller
	I0327 19:19:25.428287       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0327 19:19:25.455206       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0327 19:19:25.456835       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	W0327 19:19:25.471058       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0327 19:19:25.477359       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 19:19:25.479324       1 controller.go:624] quota admission added evaluator for: endpoints
	I0327 19:19:25.502062       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0327 19:19:25.513763       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	F0327 19:20:09.021037       1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
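
The fatal line above is the apiserver terminating itself: its "start-service-ip-repair-controllers" PostStartHook never completed the initial service-IP and node-port allocation check, so kubelet restarts the container (the apiserver log at the top of this excerpt, with 19:20:12-13 timestamps, is the replacement instance coming back healthy). A quick way to see which hooks are failing on a live control plane is the verbose health endpoint; this is only a sketch, assuming the ha-738145 context from this test is still reachable:

  kubectl --context ha-738145 get --raw '/readyz?verbose'
  kubectl --context ha-738145 get --raw '/healthz?verbose' | grep '^\[-\]'

Each check is reported as [+] (ok) or [-] (failed), the same format embedded in the kube-controller-manager error further down.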
	
	
	==> kube-controller-manager [4fdec43174f5df412cfe73ccf563d51d4b4614ed1d8a1d17b6ffe8c4e4fc743a] <==
	I0327 19:20:42.128588       1 shared_informer.go:318] Caches are synced for persistent volume
	I0327 19:20:42.128750       1 shared_informer.go:318] Caches are synced for stateful set
	I0327 19:20:42.141342       1 shared_informer.go:318] Caches are synced for disruption
	I0327 19:20:42.145841       1 shared_informer.go:318] Caches are synced for taint
	I0327 19:20:42.146741       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0327 19:20:42.146854       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-738145"
	I0327 19:20:42.146922       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-738145-m02"
	I0327 19:20:42.146980       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-738145-m04"
	I0327 19:20:42.148377       1 event.go:376] "Event occurred" object="ha-738145" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-738145 event: Registered Node ha-738145 in Controller"
	I0327 19:20:42.148402       1 event.go:376] "Event occurred" object="ha-738145-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-738145-m02 event: Registered Node ha-738145-m02 in Controller"
	I0327 19:20:42.148412       1 event.go:376] "Event occurred" object="ha-738145-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-738145-m04 event: Registered Node ha-738145-m04 in Controller"
	I0327 19:20:42.148468       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0327 19:20:42.220022       1 shared_informer.go:318] Caches are synced for endpoint
	I0327 19:20:42.220691       1 shared_informer.go:318] Caches are synced for resource quota
	I0327 19:20:42.221166       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0327 19:20:42.230443       1 shared_informer.go:318] Caches are synced for resource quota
	I0327 19:20:42.544319       1 shared_informer.go:318] Caches are synced for garbage collector
	I0327 19:20:42.568254       1 shared_informer.go:318] Caches are synced for garbage collector
	I0327 19:20:42.568288       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0327 19:20:42.856094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="52.381µs"
	I0327 19:20:44.021290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="81.23262ms"
	I0327 19:20:44.021466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="140.783µs"
	I0327 19:20:45.150139       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-738145-m04"
	I0327 19:20:45.320679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.856674ms"
	I0327 19:20:45.322031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="78.777µs"
	
	
	==> kube-controller-manager [a4eefcde3ca3f80823dc6cfdef63472f136004643edc58044d97e854de4286a7] <==
	I0327 19:19:46.679060       1 serving.go:380] Generated self-signed cert in-memory
	I0327 19:19:47.473223       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0327 19:19:47.473255       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 19:19:47.474794       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0327 19:19:47.474975       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0327 19:19:47.475295       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0327 19:19:47.475391       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0327 19:19:57.496474       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
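
This is the controller-manager side of the same failure: it refuses to build its controller context until the apiserver's /healthz goes green, and the embedded dump shows exactly one failing check, [-]poststarthook/start-service-ip-repair-controllers. The controller-manager also exposes its own health on the address shown above (127.0.0.1:10257); a minimal probe, using the same ssh convention as the rest of this report and -k because the serving cert is self-signed in-memory:

  out/minikube-linux-arm64 -p ha-738145 ssh "curl -sk https://127.0.0.1:10257/healthz"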
	
	
	==> kube-proxy [95e5cdb906ce69b3fdff26fd92f35addfec88b2ce32c5d68efc061560082c7d4] <==
	I0327 19:19:45.615472       1 server_others.go:72] "Using iptables proxy"
	I0327 19:19:45.634945       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0327 19:19:45.809437       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0327 19:19:45.809472       1 server_others.go:168] "Using iptables Proxier"
	I0327 19:19:45.827896       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0327 19:19:45.827922       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0327 19:19:45.827955       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 19:19:45.828179       1 server.go:865] "Version info" version="v1.29.3"
	I0327 19:19:45.828198       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 19:19:45.841558       1 config.go:188] "Starting service config controller"
	I0327 19:19:45.841593       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 19:19:45.841614       1 config.go:97] "Starting endpoint slice config controller"
	I0327 19:19:45.841619       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 19:19:45.850225       1 config.go:315] "Starting node config controller"
	I0327 19:19:45.850255       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 19:19:45.941729       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 19:19:45.941796       1 shared_informer.go:318] Caches are synced for service config
	I0327 19:19:45.950460       1 shared_informer.go:318] Caches are synced for node config
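
The route_localnet=1 line above is what makes NodePort services reachable on 127.0.0.1 inside the node. To confirm the sysctl actually took effect, one option (again using this report's ssh convention) is:

  out/minikube-linux-arm64 -p ha-738145 ssh "sysctl net.ipv4.conf.all.route_localnet"

A value of 1 matches the proxier's setting; 0 would mean loopback node-ports are filtered, per the flags named in that log line.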
	
	
	==> kube-scheduler [022cf92e59765ffc0c8b98d770a40f46a7acfcbebc3c93e68a2914d40177bd3b] <==
	W0327 19:19:17.452498       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 19:19:17.452603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 19:19:17.717333       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 19:19:17.717452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0327 19:19:17.984817       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 19:19:17.984870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 19:19:18.099347       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 19:19:18.099416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 19:19:18.402310       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 19:19:18.402350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 19:19:18.459934       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 19:19:18.459970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 19:19:18.885541       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0327 19:19:18.885578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0327 19:19:18.939207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 19:19:18.939247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 19:19:24.580549       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 19:19:24.580586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 19:19:25.061859       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 19:19:25.061991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 19:19:25.083700       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 19:19:25.083827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 19:19:25.267974       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 19:19:25.268112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0327 19:19:26.583283       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
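
The run of "forbidden" list/watch errors between 19:19:17 and 19:19:25 looks like startup noise rather than missing permissions: the apiserver coming up at that moment only reports "Caches are synced for node_authorizer" at 19:19:25 in its own log above, and the scheduler errors stop immediately after. If you did want to rule out a genuinely missing binding, a hedged check against the default bootstrap RBAC would be:

  kubectl --context ha-738145 get clusterrolebinding system:kube-scheduler -o wide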
	
	
	==> kubelet <==
	Mar 27 19:19:57 ha-738145 kubelet[760]: I0327 19:19:57.688655     760 scope.go:117] "RemoveContainer" containerID="06b0b32a0231f21b97c8aa51969b27a81e049bd9a4b3e4e9c766e19370cc1198"
	Mar 27 19:19:57 ha-738145 kubelet[760]: I0327 19:19:57.688926     760 scope.go:117] "RemoveContainer" containerID="a4eefcde3ca3f80823dc6cfdef63472f136004643edc58044d97e854de4286a7"
	Mar 27 19:19:57 ha-738145 kubelet[760]: E0327 19:19:57.689413     760 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-738145_kube-system(2b99ae3fec7fa351592d7292bd228432)\"" pod="kube-system/kube-controller-manager-ha-738145" podUID="2b99ae3fec7fa351592d7292bd228432"
	Mar 27 19:20:00 ha-738145 kubelet[760]: I0327 19:20:00.780259     760 scope.go:117] "RemoveContainer" containerID="a4eefcde3ca3f80823dc6cfdef63472f136004643edc58044d97e854de4286a7"
	Mar 27 19:20:00 ha-738145 kubelet[760]: E0327 19:20:00.780809     760 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-738145_kube-system(2b99ae3fec7fa351592d7292bd228432)\"" pod="kube-system/kube-controller-manager-ha-738145" podUID="2b99ae3fec7fa351592d7292bd228432"
	Mar 27 19:20:04 ha-738145 kubelet[760]: I0327 19:20:04.146937     760 scope.go:117] "RemoveContainer" containerID="a4eefcde3ca3f80823dc6cfdef63472f136004643edc58044d97e854de4286a7"
	Mar 27 19:20:04 ha-738145 kubelet[760]: E0327 19:20:04.147516     760 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-738145_kube-system(2b99ae3fec7fa351592d7292bd228432)\"" pod="kube-system/kube-controller-manager-ha-738145" podUID="2b99ae3fec7fa351592d7292bd228432"
	Mar 27 19:20:09 ha-738145 kubelet[760]: I0327 19:20:09.719777     760 scope.go:117] "RemoveContainer" containerID="891449c4ece5df001ab9dee515abc84aada2977bd9854926b984db464ae2cbb4"
	Mar 27 19:20:09 ha-738145 kubelet[760]: I0327 19:20:09.720978     760 status_manager.go:853] "Failed to get status for pod" podUID="c05e3e60dac0a01607b8d79fe63687ea" pod="kube-system/kube-apiserver-ha-738145" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-738145\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Mar 27 19:20:09 ha-738145 kubelet[760]: E0327 19:20:09.724799     760 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-738145.17c0b53f7cd313eb\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-738145.17c0b53f7cd313eb  kube-system   2620 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-738145,UID:c05e3e60dac0a01607b8d79fe63687ea,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.29.3\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-738145,},FirstTimestamp:2024-03-27 19:19:02 +0000 UTC,LastTimestamp:2024-03-27 19:20:09.722040114 +0000 UTC m=+73.518980580,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-738145,}"
	Mar 27 19:20:12 ha-738145 kubelet[760]: E0327 19:20:12.626601     760 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:48802->192.168.49.254:8443: read: connection reset by peer
	Mar 27 19:20:12 ha-738145 kubelet[760]: E0327 19:20:12.626656     760 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:48842->192.168.49.254:8443: read: connection reset by peer
	Mar 27 19:20:12 ha-738145 kubelet[760]: E0327 19:20:12.626679     760 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:48826->192.168.49.254:8443: read: connection reset by peer
	Mar 27 19:20:12 ha-738145 kubelet[760]: E0327 19:20:12.626700     760 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:48822->192.168.49.254:8443: read: connection reset by peer
	Mar 27 19:20:13 ha-738145 kubelet[760]: I0327 19:20:13.745071     760 scope.go:117] "RemoveContainer" containerID="ae3d27785f02ef4664dd5f609c568896ba4e15cd944caba9050cb1d5f0284787"
	Mar 27 19:20:15 ha-738145 kubelet[760]: I0327 19:20:15.752517     760 scope.go:117] "RemoveContainer" containerID="b1ea4e338d175cf3ea2c48e8442526f806c257ce7a70fa63a1a7fcf1398f1ba1"
	Mar 27 19:20:16 ha-738145 kubelet[760]: I0327 19:20:16.517094     760 scope.go:117] "RemoveContainer" containerID="a4eefcde3ca3f80823dc6cfdef63472f136004643edc58044d97e854de4286a7"
	Mar 27 19:20:16 ha-738145 kubelet[760]: E0327 19:20:16.517601     760 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-738145_kube-system(2b99ae3fec7fa351592d7292bd228432)\"" pod="kube-system/kube-controller-manager-ha-738145" podUID="2b99ae3fec7fa351592d7292bd228432"
	Mar 27 19:20:24 ha-738145 kubelet[760]: E0327 19:20:24.589084     760 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-738145?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 27 19:20:24 ha-738145 kubelet[760]: E0327 19:20:24.949868     760 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-738145\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-738145?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 27 19:20:29 ha-738145 kubelet[760]: I0327 19:20:29.517536     760 scope.go:117] "RemoveContainer" containerID="a4eefcde3ca3f80823dc6cfdef63472f136004643edc58044d97e854de4286a7"
	Mar 27 19:20:34 ha-738145 kubelet[760]: E0327 19:20:34.590021     760 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-738145?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 27 19:20:34 ha-738145 kubelet[760]: E0327 19:20:34.950651     760 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-738145\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-738145?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 27 19:20:44 ha-738145 kubelet[760]: E0327 19:20:44.590538     760 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-738145?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 27 19:20:44 ha-738145 kubelet[760]: E0327 19:20:44.950923     760 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-738145\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-738145?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
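
Two distinct failure modes interleave in this kubelet excerpt: the CrashLoopBackOff entries are kubelet honoring the 20s restart back-off for the controller-manager container a4eefcde... whose exit is shown above, while the "connection refused" and lease-update timeouts are kubelet failing to reach the cluster endpoint control-plane.minikube.internal:8443 (192.168.49.254) during the apiserver restart. For a crash-looping static pod like this, the previous instance's output is usually the fastest lead; a sketch, assuming the context is still live:

  kubectl --context ha-738145 -n kube-system logs kube-controller-manager-ha-738145 --previous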
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-738145 -n ha-738145
helpers_test.go:261: (dbg) Run:  kubectl --context ha-738145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (124.13s)

                                                
                                    

Test pass (301/335)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.11
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 7.49
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.09
18 TestDownloadOnly/v1.29.3/DeleteAll 0.2
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-beta.0/json-events 12.93
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.55
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
36 TestAddons/Setup 163.38
38 TestAddons/parallel/Registry 16.35
40 TestAddons/parallel/InspektorGadget 11.86
41 TestAddons/parallel/MetricsServer 6.76
44 TestAddons/parallel/CSI 68.54
45 TestAddons/parallel/Headlamp 11.96
46 TestAddons/parallel/CloudSpanner 5.56
47 TestAddons/parallel/LocalPath 53.27
48 TestAddons/parallel/NvidiaDevicePlugin 5.5
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.24
54 TestCertOptions 45.73
55 TestCertExpiration 253.44
57 TestForceSystemdFlag 39.15
58 TestForceSystemdEnv 42.05
64 TestErrorSpam/setup 29.45
65 TestErrorSpam/start 0.98
66 TestErrorSpam/status 1.02
67 TestErrorSpam/pause 1.72
68 TestErrorSpam/unpause 1.74
69 TestErrorSpam/stop 1.46
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 73.07
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 29.84
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.65
81 TestFunctional/serial/CacheCmd/cache/add_local 1.1
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.37
86 TestFunctional/serial/CacheCmd/cache/delete 0.16
87 TestFunctional/serial/MinikubeKubectlCmd 0.16
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 65.1
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.69
92 TestFunctional/serial/LogsFileCmd 1.7
93 TestFunctional/serial/InvalidService 4.72
95 TestFunctional/parallel/ConfigCmd 0.52
96 TestFunctional/parallel/DashboardCmd 14.81
97 TestFunctional/parallel/DryRun 0.63
98 TestFunctional/parallel/InternationalLanguage 0.27
99 TestFunctional/parallel/StatusCmd 1.18
103 TestFunctional/parallel/ServiceCmdConnect 11.64
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 24.03
107 TestFunctional/parallel/SSHCmd 0.7
108 TestFunctional/parallel/CpCmd 2.43
110 TestFunctional/parallel/FileSync 0.33
111 TestFunctional/parallel/CertSync 2.02
115 TestFunctional/parallel/NodeLabels 0.11
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
119 TestFunctional/parallel/License 0.33
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.54
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
133 TestFunctional/parallel/ProfileCmd/profile_list 0.41
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
135 TestFunctional/parallel/MountCmd/any-port 7.31
136 TestFunctional/parallel/ServiceCmd/List 0.5
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
139 TestFunctional/parallel/ServiceCmd/Format 0.38
140 TestFunctional/parallel/ServiceCmd/URL 0.38
141 TestFunctional/parallel/MountCmd/specific-port 2.15
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
143 TestFunctional/parallel/Version/short 0.06
144 TestFunctional/parallel/Version/components 1.16
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.76
150 TestFunctional/parallel/ImageCommands/Setup 2.5
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.94
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.31
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.28
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.89
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.24
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.95
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/StartCluster 162.3
168 TestMultiControlPlane/serial/DeployApp 6.86
169 TestMultiControlPlane/serial/PingHostFromPods 1.7
170 TestMultiControlPlane/serial/AddWorkerNode 51.73
171 TestMultiControlPlane/serial/NodeLabels 0.12
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
173 TestMultiControlPlane/serial/CopyFile 18.97
174 TestMultiControlPlane/serial/StopSecondaryNode 12.79
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.58
176 TestMultiControlPlane/serial/RestartSecondaryNode 23.44
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.97
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 171.63
179 TestMultiControlPlane/serial/DeleteSecondaryNode 12.96
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
181 TestMultiControlPlane/serial/StopCluster 35.72
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
184 TestMultiControlPlane/serial/AddSecondaryNode 60.14
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
189 TestJSONOutput/start/Command 76.5
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.69
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.65
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.91
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.24
214 TestKicCustomNetwork/create_custom_network 47.41
215 TestKicCustomNetwork/use_default_bridge_network 36.19
216 TestKicExistingNetwork 36.14
217 TestKicCustomSubnet 35.48
218 TestKicStaticIP 35.79
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 67.99
223 TestMountStart/serial/StartWithMountFirst 6.58
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 7.06
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.59
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.2
230 TestMountStart/serial/RestartStopped 8.19
231 TestMountStart/serial/VerifyMountPostStop 0.27
234 TestMultiNode/serial/FreshStart2Nodes 128.89
235 TestMultiNode/serial/DeployApp2Nodes 5.09
236 TestMultiNode/serial/PingHostFrom2Pods 1.06
237 TestMultiNode/serial/AddNode 16.55
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.33
240 TestMultiNode/serial/CopyFile 10.28
241 TestMultiNode/serial/StopNode 2.28
242 TestMultiNode/serial/StartAfterStop 10.1
243 TestMultiNode/serial/RestartKeepsNodes 107.49
244 TestMultiNode/serial/DeleteNode 5.69
245 TestMultiNode/serial/StopMultiNode 23.82
246 TestMultiNode/serial/RestartMultiNode 57.05
247 TestMultiNode/serial/ValidateNameConflict 37.94
252 TestPreload 123.97
254 TestScheduledStopUnix 104.84
257 TestInsufficientStorage 10.31
258 TestRunningBinaryUpgrade 80.27
260 TestKubernetesUpgrade 381.7
261 TestMissingContainerUpgrade 150.98
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
265 TestPause/serial/Start 58.85
266 TestNoKubernetes/serial/StartWithK8s 42.33
267 TestNoKubernetes/serial/StartWithStopK8s 6.75
268 TestNoKubernetes/serial/Start 6.44
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
270 TestNoKubernetes/serial/ProfileList 0.95
271 TestNoKubernetes/serial/Stop 1.21
272 TestNoKubernetes/serial/StartNoArgs 6.82
273 TestPause/serial/SecondStartNoReconfiguration 28.03
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
275 TestPause/serial/Pause 1.02
276 TestPause/serial/VerifyStatus 0.42
277 TestPause/serial/Unpause 0.88
278 TestPause/serial/PauseAgain 1.4
279 TestPause/serial/DeletePaused 4.16
280 TestPause/serial/VerifyDeletedResources 0.16
281 TestStoppedBinaryUpgrade/Setup 1.27
282 TestStoppedBinaryUpgrade/Upgrade 83.76
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
298 TestNetworkPlugins/group/false 4.93
303 TestStartStop/group/old-k8s-version/serial/FirstStart 160.89
304 TestStartStop/group/old-k8s-version/serial/DeployApp 8.49
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
306 TestStartStop/group/old-k8s-version/serial/Stop 12.03
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/old-k8s-version/serial/SecondStart 52.27
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.11
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 18.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.46
314 TestStartStop/group/old-k8s-version/serial/Pause 3.58
316 TestStartStop/group/embed-certs/serial/FirstStart 77.71
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.67
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.84
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.05
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.06
322 TestStartStop/group/embed-certs/serial/DeployApp 8.55
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.41
324 TestStartStop/group/embed-certs/serial/Stop 12.56
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
326 TestStartStop/group/embed-certs/serial/SecondStart 289.26
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.51
332 TestStartStop/group/no-preload/serial/FirstStart 64.59
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
336 TestStartStop/group/embed-certs/serial/Pause 4.11
338 TestStartStop/group/newest-cni/serial/FirstStart 52.77
339 TestStartStop/group/no-preload/serial/DeployApp 9.43
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.3
341 TestStartStop/group/no-preload/serial/Stop 12.03
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.43
343 TestStartStop/group/no-preload/serial/SecondStart 279.16
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
346 TestStartStop/group/newest-cni/serial/Stop 1.25
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
348 TestStartStop/group/newest-cni/serial/SecondStart 22.58
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
352 TestStartStop/group/newest-cni/serial/Pause 2.94
353 TestNetworkPlugins/group/auto/Start 86.29
354 TestNetworkPlugins/group/auto/KubeletFlags 0.32
355 TestNetworkPlugins/group/auto/NetCatPod 11.31
356 TestNetworkPlugins/group/auto/DNS 0.2
357 TestNetworkPlugins/group/auto/Localhost 0.19
358 TestNetworkPlugins/group/auto/HairPin 0.18
359 TestNetworkPlugins/group/kindnet/Start 78.92
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
362 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
363 TestNetworkPlugins/group/kindnet/DNS 0.18
364 TestNetworkPlugins/group/kindnet/Localhost 0.16
365 TestNetworkPlugins/group/kindnet/HairPin 0.16
366 TestNetworkPlugins/group/calico/Start 73.26
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.15
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
370 TestStartStop/group/no-preload/serial/Pause 4.61
371 TestNetworkPlugins/group/custom-flannel/Start 71.71
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.38
374 TestNetworkPlugins/group/calico/NetCatPod 10.56
375 TestNetworkPlugins/group/calico/DNS 0.37
376 TestNetworkPlugins/group/calico/Localhost 0.16
377 TestNetworkPlugins/group/calico/HairPin 0.18
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.47
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.36
380 TestNetworkPlugins/group/custom-flannel/DNS 0.22
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
383 TestNetworkPlugins/group/enable-default-cni/Start 92.91
384 TestNetworkPlugins/group/flannel/Start 70.76
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
389 TestNetworkPlugins/group/flannel/NetCatPod 10.3
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
393 TestNetworkPlugins/group/flannel/DNS 0.22
394 TestNetworkPlugins/group/flannel/Localhost 0.2
395 TestNetworkPlugins/group/flannel/HairPin 0.22
396 TestNetworkPlugins/group/bridge/Start 84.57
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
398 TestNetworkPlugins/group/bridge/NetCatPod 10.27
399 TestNetworkPlugins/group/bridge/DNS 0.19
400 TestNetworkPlugins/group/bridge/Localhost 0.16
401 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.20.0/json-events (12.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-837463 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-837463 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.108456121s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-837463
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-837463: exit status 85 (89.495119ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-837463 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |          |
	|         | -p download-only-837463        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 18:58:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 18:58:15.077846  567628 out.go:291] Setting OutFile to fd 1 ...
	I0327 18:58:15.078080  567628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:58:15.078112  567628 out.go:304] Setting ErrFile to fd 2...
	I0327 18:58:15.078136  567628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:58:15.078432  567628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	W0327 18:58:15.078616  567628 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18517-562206/.minikube/config/config.json: open /home/jenkins/minikube-integration/18517-562206/.minikube/config/config.json: no such file or directory
	I0327 18:58:15.079154  567628 out.go:298] Setting JSON to true
	I0327 18:58:15.080170  567628 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9633,"bootTime":1711556262,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 18:58:15.080289  567628 start.go:139] virtualization:  
	I0327 18:58:15.083681  567628 out.go:97] [download-only-837463] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 18:58:15.085680  567628 out.go:169] MINIKUBE_LOCATION=18517
	W0327 18:58:15.083940  567628 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 18:58:15.083985  567628 notify.go:220] Checking for updates...
	I0327 18:58:15.090096  567628 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 18:58:15.092111  567628 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 18:58:15.094196  567628 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	I0327 18:58:15.095785  567628 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 18:58:15.099608  567628 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 18:58:15.099977  567628 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 18:58:15.122046  567628 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 18:58:15.122168  567628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 18:58:15.186954  567628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 18:58:15.176228531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 18:58:15.187079  567628 docker.go:295] overlay module found
	I0327 18:58:15.188791  567628 out.go:97] Using the docker driver based on user configuration
	I0327 18:58:15.188833  567628 start.go:297] selected driver: docker
	I0327 18:58:15.188840  567628 start.go:901] validating driver "docker" against <nil>
	I0327 18:58:15.188941  567628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 18:58:15.245694  567628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 18:58:15.235775528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 18:58:15.245874  567628 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 18:58:15.246205  567628 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 18:58:15.246367  567628 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 18:58:15.248464  567628 out.go:169] Using Docker driver with root privileges
	I0327 18:58:15.250418  567628 cni.go:84] Creating CNI manager for ""
	I0327 18:58:15.250443  567628 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0327 18:58:15.250453  567628 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 18:58:15.250536  567628 start.go:340] cluster config:
	{Name:download-only-837463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-837463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 18:58:15.252554  567628 out.go:97] Starting "download-only-837463" primary control-plane node in "download-only-837463" cluster
	I0327 18:58:15.252583  567628 cache.go:121] Beginning downloading kic base image for docker with crio
	I0327 18:58:15.254898  567628 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 18:58:15.254930  567628 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0327 18:58:15.255106  567628 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 18:58:15.269445  567628 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 18:58:15.269645  567628 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 18:58:15.269743  567628 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 18:58:15.349277  567628 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0327 18:58:15.349301  567628 cache.go:56] Caching tarball of preloaded images
	I0327 18:58:15.349467  567628 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0327 18:58:15.352611  567628 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 18:58:15.352635  567628 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0327 18:58:15.507639  567628 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0327 18:58:20.733041  567628 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 18:58:22.754682  567628 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0327 18:58:22.754789  567628 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0327 18:58:23.848375  567628 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0327 18:58:23.848740  567628 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/download-only-837463/config.json ...
	I0327 18:58:23.848777  567628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/download-only-837463/config.json: {Name:mk2698f34bf0ccc2a83719108d1eb21475c57673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:58:23.848978  567628 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0327 18:58:23.849163  567628 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18517-562206/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-837463 host does not exist
	  To start a cluster, run: "minikube start -p download-only-837463"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
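
The v1.20.0 run above downloads the preload tarball with an md5 digest embedded in the URL query (download.go) and then re-verifies it on disk (preload.go). As a rough illustration only (this is not minikube's implementation, and the helper name is invented), a checksum-verified download in Go can be as small as:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithMD5 streams url to dest while hashing, then compares the
    // digest against wantMD5 (the hex value after "checksum=md5:" above).
    func downloadWithMD5(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        fmt.Println(downloadWithMD5(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4",
            "/tmp/preload.tar.lz4",
            "59cd2ef07b53f039bfd1761b921f2a02"))
    }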

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-837463
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (7.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-541014 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-541014 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.485476727s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (7.49s)
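
json-events exercises the machine-readable output mode: with -o=json, minikube prints one JSON event per line instead of human-readable progress. A hedged sketch of a consumer follows; the binary path and flags come from the command above, while the events are decoded generically because their schema is not shown in this log:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
            "--download-only", "-p", "download-only-541014", "--force",
            "--kubernetes-version=v1.29.3", "--container-runtime=crio",
            "--driver=docker")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        sc := bufio.NewScanner(stdout)
        for sc.Scan() {
            var ev map[string]any
            if json.Unmarshal(sc.Bytes(), &ev) == nil {
                fmt.Println(ev) // each line is one structured event
            }
        }
        _ = cmd.Wait()
    }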

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-541014
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-541014: exit status 85 (91.966807ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-837463 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | -p download-only-837463        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| delete  | -p download-only-837463        | download-only-837463 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| start   | -o=json --download-only        | download-only-541014 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | -p download-only-541014        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 18:58:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 18:58:27.595238  567795 out.go:291] Setting OutFile to fd 1 ...
	I0327 18:58:27.595371  567795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:58:27.595418  567795 out.go:304] Setting ErrFile to fd 2...
	I0327 18:58:27.595426  567795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:58:27.595736  567795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 18:58:27.596151  567795 out.go:298] Setting JSON to true
	I0327 18:58:27.596955  567795 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9645,"bootTime":1711556262,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 18:58:27.597025  567795 start.go:139] virtualization:  
	I0327 18:58:27.599649  567795 out.go:97] [download-only-541014] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 18:58:27.602103  567795 out.go:169] MINIKUBE_LOCATION=18517
	I0327 18:58:27.599903  567795 notify.go:220] Checking for updates...
	I0327 18:58:27.604684  567795 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 18:58:27.606826  567795 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 18:58:27.609115  567795 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	I0327 18:58:27.611298  567795 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 18:58:27.615060  567795 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 18:58:27.615374  567795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 18:58:27.634283  567795 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 18:58:27.634399  567795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 18:58:27.698492  567795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 18:58:27.68945793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 18:58:27.698609  567795 docker.go:295] overlay module found
	I0327 18:58:27.701155  567795 out.go:97] Using the docker driver based on user configuration
	I0327 18:58:27.701180  567795 start.go:297] selected driver: docker
	I0327 18:58:27.701187  567795 start.go:901] validating driver "docker" against <nil>
	I0327 18:58:27.701293  567795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 18:58:27.757825  567795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 18:58:27.749248913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 18:58:27.758019  567795 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 18:58:27.758303  567795 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 18:58:27.758464  567795 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 18:58:27.761709  567795 out.go:169] Using Docker driver with root privileges
	I0327 18:58:27.764055  567795 cni.go:84] Creating CNI manager for ""
	I0327 18:58:27.764077  567795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0327 18:58:27.764089  567795 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 18:58:27.764168  567795 start.go:340] cluster config:
	{Name:download-only-541014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-541014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 18:58:27.766481  567795 out.go:97] Starting "download-only-541014" primary control-plane node in "download-only-541014" cluster
	I0327 18:58:27.766505  567795 cache.go:121] Beginning downloading kic base image for docker with crio
	I0327 18:58:27.768743  567795 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 18:58:27.768770  567795 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 18:58:27.768925  567795 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 18:58:27.781404  567795 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 18:58:27.781532  567795 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 18:58:27.781558  567795 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 18:58:27.781564  567795 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 18:58:27.781572  567795 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 18:58:27.864557  567795 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4
	I0327 18:58:27.864596  567795 cache.go:56] Caching tarball of preloaded images
	I0327 18:58:27.864773  567795 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0327 18:58:27.867164  567795 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 18:58:27.867185  567795 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4 ...
	I0327 18:58:28.015395  567795 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:84fdcab7b9f3aeb3e0da1cc4f5f14b7b -> /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-541014 host does not exist
	  To start a cluster, run: "minikube start -p download-only-541014"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.09s)
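
Driver validation in the log above shells out twice to docker system info --format "{{json .}}" and decodes the result. The same probe, reduced to a few of the fields visible in the logged output (a sketch, not minikube's actual struct):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "system", "info",
            "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        var info struct {
            NCPU          int
            MemTotal      int64
            ServerVersion string
            OSType        string
            Architecture  string
        }
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s %s/%s: %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OSType, info.Architecture,
            info.NCPU, info.MemTotal)
    }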

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-541014
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/json-events (12.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-696066 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-696066 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.924952388s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (12.93s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-696066
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-696066: exit status 85 (83.368507ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-837463 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | -p download-only-837463             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| delete  | -p download-only-837463             | download-only-837463 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| start   | -o=json --download-only             | download-only-541014 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | -p download-only-541014             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| delete  | -p download-only-541014             | download-only-541014 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC | 27 Mar 24 18:58 UTC |
	| start   | -o=json --download-only             | download-only-696066 | jenkins | v1.33.0-beta.0 | 27 Mar 24 18:58 UTC |                     |
	|         | -p download-only-696066             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 18:58:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 18:58:35.503631  567961 out.go:291] Setting OutFile to fd 1 ...
	I0327 18:58:35.503770  567961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:58:35.503796  567961 out.go:304] Setting ErrFile to fd 2...
	I0327 18:58:35.503802  567961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 18:58:35.504055  567961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 18:58:35.504440  567961 out.go:298] Setting JSON to true
	I0327 18:58:35.505304  567961 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9653,"bootTime":1711556262,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 18:58:35.505372  567961 start.go:139] virtualization:  
	I0327 18:58:35.508386  567961 out.go:97] [download-only-696066] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 18:58:35.510848  567961 out.go:169] MINIKUBE_LOCATION=18517
	I0327 18:58:35.508675  567961 notify.go:220] Checking for updates...
	I0327 18:58:35.515584  567961 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 18:58:35.519090  567961 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 18:58:35.520913  567961 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	I0327 18:58:35.523078  567961 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0327 18:58:35.527939  567961 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 18:58:35.528231  567961 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 18:58:35.547211  567961 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 18:58:35.547323  567961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 18:58:35.613135  567961 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 18:58:35.60396525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 18:58:35.613250  567961 docker.go:295] overlay module found
	I0327 18:58:35.615709  567961 out.go:97] Using the docker driver based on user configuration
	I0327 18:58:35.615734  567961 start.go:297] selected driver: docker
	I0327 18:58:35.615741  567961 start.go:901] validating driver "docker" against <nil>
	I0327 18:58:35.615838  567961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 18:58:35.672682  567961 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-27 18:58:35.66281946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 18:58:35.672855  567961 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 18:58:35.673129  567961 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0327 18:58:35.673294  567961 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 18:58:35.675695  567961 out.go:169] Using Docker driver with root privileges
	I0327 18:58:35.678009  567961 cni.go:84] Creating CNI manager for ""
	I0327 18:58:35.678032  567961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0327 18:58:35.678042  567961 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 18:58:35.678130  567961 start.go:340] cluster config:
	{Name:download-only-696066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-696066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 18:58:35.680315  567961 out.go:97] Starting "download-only-696066" primary control-plane node in "download-only-696066" cluster
	I0327 18:58:35.680344  567961 cache.go:121] Beginning downloading kic base image for docker with crio
	I0327 18:58:35.682324  567961 out.go:97] Pulling base image v0.0.43-beta.0 ...
	I0327 18:58:35.682350  567961 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0327 18:58:35.682514  567961 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local docker daemon
	I0327 18:58:35.695037  567961 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 to local cache
	I0327 18:58:35.695163  567961 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory
	I0327 18:58:35.695198  567961 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 in local cache directory, skipping pull
	I0327 18:58:35.695203  567961 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 exists in cache, skipping pull
	I0327 18:58:35.695211  567961 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 as a tarball
	I0327 18:58:35.785939  567961 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0327 18:58:35.785965  567961 cache.go:56] Caching tarball of preloaded images
	I0327 18:58:35.786147  567961 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0327 18:58:35.788374  567961 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 18:58:35.788412  567961 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0327 18:58:35.936741  567961 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:869a9f80cd246e74d899316f2e05b887 -> /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0327 18:58:43.565391  567961 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0327 18:58:43.565524  567961 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18517-562206/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0327 18:58:44.429324  567961 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on crio
	I0327 18:58:44.429715  567961 profile.go:142] Saving config to /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/download-only-696066/config.json ...
	I0327 18:58:44.429751  567961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/download-only-696066/config.json: {Name:mkce57e91cd44a3cd94c6079f28fe7d2de6b13a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 18:58:44.429964  567961 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0327 18:58:44.430636  567961 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18517-562206/.minikube/cache/linux/arm64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-696066 host does not exist
	  To start a cluster, run: "minikube start -p download-only-696066"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)
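
Unlike the preload tarballs, the kubectl binary above is verified against a digest fetched from a sidecar file (?checksum=file:<url>.sha256). A hypothetical equivalent; the helper name and the first-field parsing are assumptions about the .sha256 format, which is commonly either "<digest>" or "<digest>  <filename>":

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // sha256Matches fetches the expected digest from sumURL, hashes the
    // local file at path, and compares the two.
    func sha256Matches(path, sumURL string) (bool, error) {
        resp, err := http.Get(sumURL)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        raw, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        fields := strings.Fields(string(raw))
        if len(fields) == 0 {
            return false, fmt.Errorf("empty checksum file %s", sumURL)
        }
        want := fields[0]

        f, err := os.Open(path)
        if err != nil {
            return false, err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return false, err
        }
        return hex.EncodeToString(h.Sum(nil)) == want, nil
    }

    func main() {
        ok, err := sha256Matches("kubectl",
            "https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/arm64/kubectl.sha256")
        fmt.Println(ok, err)
    }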

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-696066
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-289584 --alsologtostderr --binary-mirror http://127.0.0.1:39875 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-289584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-289584
--- PASS: TestBinaryMirror (0.55s)
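
TestBinaryMirror points minikube at a throwaway HTTP server on 127.0.0.1:39875 via --binary-mirror. Functionally such a mirror is just an HTTP file server fronting pre-downloaded release binaries; a minimal stand-in (the ./mirror directory layout is an assumption, not taken from this log):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a local directory of cached binaries so that downloads hit
        // this process instead of dl.k8s.io.
        log.Fatal(http.ListenAndServe("127.0.0.1:39875",
            http.FileServer(http.Dir("./mirror"))))
    }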

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-408183
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-408183: exit status 85 (94.599331ms)

                                                
                                                
-- stdout --
	* Profile "addons-408183" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-408183"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-408183
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-408183: exit status 85 (99.285913ms)

                                                
                                                
-- stdout --
	* Profile "addons-408183" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-408183"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)
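
Both PreSetup checks treat exit status 85 as the expected outcome: enabling or disabling an addon on a profile that does not exist must fail with that specific code. Asserting on a concrete exit code from Go goes through *exec.ExitError (standard-library sketch):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("out/minikube-linux-arm64",
            "addons", "enable", "dashboard", "-p", "addons-408183").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            fmt.Println("got the expected exit status 85")
            return
        }
        fmt.Println("unexpected result:", err)
    }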

                                                
                                    
TestAddons/Setup (163.38s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-408183 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-408183 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m43.382585899s)
--- PASS: TestAddons/Setup (163.38s)
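
The Setup invocation enables twelve addons by repeating --addons. When scripting comparable runs it is less error-prone to build the flag list from a slice; an illustrative snippet, not code from the test suite:

    package main

    import "fmt"

    func main() {
        addons := []string{
            "registry", "metrics-server", "volumesnapshots",
            "csi-hostpath-driver", "gcp-auth", "cloud-spanner",
            "inspektor-gadget", "storage-provisioner-rancher",
            "nvidia-device-plugin", "yakd", "ingress", "ingress-dns",
        }
        args := []string{"start", "-p", "addons-408183", "--wait=true",
            "--memory=4000", "--driver=docker", "--container-runtime=crio"}
        for _, a := range addons {
            args = append(args, "--addons="+a)
        }
        fmt.Println(args)
    }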

                                                
                                    
TestAddons/parallel/Registry (16.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 41.187614ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9wfw5" [6c3f112a-7577-41d6-b765-1344e134d816] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005623655s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s2lkz" [ecdacb2c-048d-4a04-b2d2-648381ae630e] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005180949s
addons_test.go:340: (dbg) Run:  kubectl --context addons-408183 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-408183 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-408183 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.259431292s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 ip
2024/03/27 19:01:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.35s)
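
The registry check reduces to two reachability probes: wget --spider against the in-cluster service name, and a plain GET against the node IP on port 5000 (the [DEBUG] line). The same probe from Go, with the address copied from the log and an arbitrary timeout:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        c := &http.Client{Timeout: 5 * time.Second}
        resp, err := c.Get("http://192.168.49.2:5000")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }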

                                                
                                    
TestAddons/parallel/InspektorGadget (11.86s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x25n6" [0a185611-2218-4cda-9001-d261e945c773] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00385163s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-408183
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-408183: (5.858490802s)
--- PASS: TestAddons/parallel/InspektorGadget (11.86s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 16.175473ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-qwvb6" [c98ba762-4ee9-431f-8331-f7b0859f18c0] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006293559s
addons_test.go:415: (dbg) Run:  kubectl --context addons-408183 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.76s)
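
Every "waiting 6m0s for pods matching ..." step in these addon tests is a poll-until-healthy loop. A generic stand-in using only the standard library; the 2s interval and the dummy condition are placeholders, not values from the harness:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pollUntil re-runs cond every interval until it reports true, returns
    // an error, or the timeout elapses.
    func pollUntil(timeout, interval time.Duration, cond func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := cond()
            if err != nil || ok {
                return err
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        err := pollUntil(6*time.Minute, 2*time.Second, func() (bool, error) {
            // In the real tests this would shell out to kubectl and inspect
            // pod readiness or a PVC phase.
            return time.Since(start) > 4*time.Second, nil
        })
        fmt.Println(err) // <nil> once the condition flips to true
    }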

                                                
                                    
TestAddons/parallel/CSI (68.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 41.832738ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408183 get pvc hpvc -o jsonpath={.status.phase} -n default   (identical poll repeated 30 times in total)
addons_test.go:574: (dbg) Run:  kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a362047a-1880-491b-8d0b-16b31ece4259] Pending
helpers_test.go:344: "task-pv-pod" [a362047a-1880-491b-8d0b-16b31ece4259] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a362047a-1880-491b-8d0b-16b31ece4259] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004802916s
addons_test.go:584: (dbg) Run:  kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-408183 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-408183 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-408183 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-408183 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default   (identical poll repeated 13 times in total)
addons_test.go:616: (dbg) Run:  kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6756440b-aba0-495f-9a51-b601bb4411ca] Pending
helpers_test.go:344: "task-pv-pod-restore" [6756440b-aba0-495f-9a51-b601bb4411ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6756440b-aba0-495f-9a51-b601bb4411ca] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003612967s
addons_test.go:626: (dbg) Run:  kubectl --context addons-408183 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-408183 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-408183 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-408183 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.788878336s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.54s)
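
For reference, the snapshot-and-restore flow exercised above can be replayed by hand. This is a sketch using the commands from this run; the testdata manifests (not shown here) define the "hpvc" PVC, the "task-pv-pod" pod, and the "new-snapshot-demo" VolumeSnapshot:

	# provision a PVC through the csi-hostpath driver and mount it in a pod
	kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# snapshot the volume, drop the original pod/PVC, then restore from the snapshot
	kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-408183 delete pod task-pv-pod
	kubectl --context addons-408183 delete pvc hpvc
	kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-408183 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml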

TestAddons/parallel/Headlamp (11.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-408183 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-tss7p" [be7f4d47-238d-4913-b44f-71942ae09a88] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-tss7p" [be7f4d47-238d-4913-b44f-71942ae09a88] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004296813s
--- PASS: TestAddons/parallel/Headlamp (11.96s)
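
The manual equivalent is an enable plus a readiness wait; the namespace and pod label below are the ones the test selects on, while the kubectl wait invocation is an addition, not part of the test:

	out/minikube-linux-arm64 addons enable headlamp -p addons-408183
	kubectl --context addons-408183 -n headlamp wait pod \
	  -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m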

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-7kfjf" [bf71ace9-2392-48f0-9c7b-05c75de86ec4] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004318923s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-408183
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/LocalPath (53.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-408183 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-408183 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408183 get pvc test-pvc -o jsonpath={.status.phase} -n default   (identical poll repeated 6 times in total)
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f77a8e08-0072-433a-9d00-ee06b93f7e06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f77a8e08-0072-433a-9d00-ee06b93f7e06] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f77a8e08-0072-433a-9d00-ee06b93f7e06] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004658531s
addons_test.go:891: (dbg) Run:  kubectl --context addons-408183 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 ssh "cat /opt/local-path-provisioner/pvc-202e28ad-5d38-4c60-aad7-1dea41135b4e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-408183 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-408183 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-408183 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-408183 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.176629403s)
--- PASS: TestAddons/parallel/LocalPath (53.27s)
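
The write-then-read check above can be reproduced roughly as follows. Note that local-path provisions one host directory per claim, named after the PVC's UID, so the pvc-202e28ad-… path below is specific to this run:

	kubectl --context addons-408183 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-408183 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# read back, from the node, the file the busybox pod wrote into the volume
	out/minikube-linux-arm64 -p addons-408183 ssh \
	  "cat /opt/local-path-provisioner/pvc-202e28ad-5d38-4c60-aad7-1dea41135b4e_default_test-pvc/file1"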

TestAddons/parallel/NvidiaDevicePlugin (5.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qkdl4" [78361e37-2128-4937-8e3a-361cd2184fa5] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004245713s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-408183
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-4swzk" [e88eaf0b-49f6-451a-bad0-389ae453aac6] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003946701s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-408183 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-408183 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)
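
What this verifies is that the gcp-auth addon propagates its credentials secret into namespaces created after the addon was enabled; the manual equivalent is simply:

	kubectl --context addons-408183 create ns new-namespace
	# the addon should have copied its secret into the fresh namespace
	kubectl --context addons-408183 get secret gcp-auth -n new-namespace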

TestAddons/StoppedEnableDisable (12.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-408183
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-408183: (11.932388182s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-408183
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-408183
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-408183
--- PASS: TestAddons/StoppedEnableDisable (12.24s)
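
In other words, addon toggling is expected to keep working while the cluster is stopped; a minimal replay of the sequence above:

	out/minikube-linux-arm64 stop -p addons-408183
	out/minikube-linux-arm64 addons enable dashboard -p addons-408183
	out/minikube-linux-arm64 addons disable dashboard -p addons-408183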

TestCertOptions (45.73s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-558057 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-558057 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (42.65629348s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-558057 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-558057 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-558057 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-558057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-558057
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-558057: (2.178486961s)
--- PASS: TestCertOptions (45.73s)
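
To confirm the extra SANs (--apiserver-ips/--apiserver-names) actually landed in the serving certificate, the openssl call from the test can be filtered by hand; the grep is an addition for readability, not part of the test:

	out/minikube-linux-arm64 -p cert-options-558057 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'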

TestCertExpiration (253.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-119203 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0327 19:47:20.504013  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-119203 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.813649192s)
E0327 19:49:17.458620  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-119203 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-119203 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (32.159962622s)
helpers_test.go:175: Cleaning up "cert-expiration-119203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-119203
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-119203: (2.463654571s)
--- PASS: TestCertExpiration (253.44s)
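
The shape of this test: start with deliberately short-lived certificates, wait out the expiry window (the gap between the two start invocations above), then restart with a long expiry and expect minikube to regenerate the certificates rather than fail:

	out/minikube-linux-arm64 start -p cert-expiration-119203 --memory=2048 \
	  --cert-expiration=3m --driver=docker --container-runtime=crio
	# ...after the 3m window has passed...
	out/minikube-linux-arm64 start -p cert-expiration-119203 --memory=2048 \
	  --cert-expiration=8760h --driver=docker --container-runtime=crio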

TestForceSystemdFlag (39.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-047581 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-047581 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.521993523s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-047581 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-047581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-047581
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-047581: (2.340145713s)
--- PASS: TestForceSystemdFlag (39.15s)
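
--force-systemd asks the node's container runtime to use the systemd cgroup manager; the test inspects CRI-O's drop-in config to confirm, which can be done manually (expecting a systemd cgroup_manager entry is an inference from the flag's purpose, not quoted from this log):

	out/minikube-linux-arm64 start -p force-systemd-flag-047581 --memory=2048 \
	  --force-systemd --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p force-systemd-flag-047581 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf"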

TestForceSystemdEnv (42.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-071420 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0327 19:46:33.833271  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-071420 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.980330564s)
helpers_test.go:175: Cleaning up "force-systemd-env-071420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-071420
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-071420: (4.072740595s)
--- PASS: TestForceSystemdEnv (42.05s)

TestErrorSpam/setup (29.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-896532 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-896532 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-896532 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-896532 --driver=docker  --container-runtime=crio: (29.453140442s)
--- PASS: TestErrorSpam/setup (29.45s)

TestErrorSpam/start (0.98s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 start --dry-run
--- PASS: TestErrorSpam/start (0.98s)

TestErrorSpam/status (1.02s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 status
--- PASS: TestErrorSpam/status (1.02s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 stop: (1.256089119s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896532 --log_dir /tmp/nospam-896532 stop
--- PASS: TestErrorSpam/stop (1.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/test/nested/copy/567623/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-990825 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0327 19:06:33.834561  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:33.840241  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:33.850469  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:33.870743  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:33.911000  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:33.991257  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:34.151594  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:34.472132  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:35.113167  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:36.393373  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:38.953581  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:44.074459  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:06:54.315201  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:07:14.795848  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-990825 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m13.069319829s)
--- PASS: TestFunctional/serial/StartWithProxy (73.07s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.84s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-990825 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-990825 --alsologtostderr -v=8: (29.842218561s)
functional_test.go:659: soft start took 29.84393909s for "functional-990825" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.84s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-990825 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 cache add registry.k8s.io/pause:3.1: (1.225228529s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cache add registry.k8s.io/pause:3.3
E0327 19:07:55.756065  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 cache add registry.k8s.io/pause:3.3: (1.216667461s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 cache add registry.k8s.io/pause:latest: (1.210915417s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)
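
cache add pulls an image on the host and preloads it into the node's runtime so it can later be used without network access; the basic round trip, using the commands from this log, looks like:

	out/minikube-linux-arm64 -p functional-990825 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 cache list      # the cached tags should be listed
	out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1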

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-990825 /tmp/TestFunctionalserialCacheCmdcacheadd_local910399624/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cache add minikube-local-cache-test:functional-990825
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cache delete minikube-local-cache-test:functional-990825
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-990825
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (428.593483ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 cache reload: (1.315041522s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.37s)
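
The reload check, spelled out: remove the image from inside the node, confirm crictl no longer finds it (the exit-1 block above), then let cache reload push it back from the host-side cache:

	out/minikube-linux-arm64 -p functional-990825 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-990825 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
	out/minikube-linux-arm64 -p functional-990825 cache reload
	out/minikube-linux-arm64 -p functional-990825 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again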

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 kubectl -- --context functional-990825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-990825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (65.1s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-990825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-990825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m5.095977411s)
functional_test.go:757: restart took 1m5.096097944s for "functional-990825" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (65.10s)
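
--extra-config takes the form <component>.<flag>=<value> and is applied on restart; here it injects an admission plugin into the apiserver:

	out/minikube-linux-arm64 start -p functional-990825 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all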

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-990825 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.69s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 logs: (1.693468601s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

TestFunctional/serial/LogsFileCmd (1.7s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 logs --file /tmp/TestFunctionalserialLogsFileCmd1979712649/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 logs --file /tmp/TestFunctionalserialLogsFileCmd1979712649/001/logs.txt: (1.697876382s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

TestFunctional/serial/InvalidService (4.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-990825 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-990825
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-990825: exit status 115 (606.401043ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30738 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-990825 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.72s)
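
The scenario: a NodePort service whose selector matches no running pod. minikube service resolves a URL (see the table above) but then fails with SVC_UNREACHABLE instead of handing back a dead endpoint:

	kubectl --context functional-990825 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-990825   # exit 115
	kubectl --context functional-990825 delete -f testdata/invalidsvc.yaml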

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 config get cpus: exit status 14 (94.803449ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 config get cpus: exit status 14 (93.130642ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
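
The config subcommand round trip, with the exit codes observed above (14 when the key is absent):

	out/minikube-linux-arm64 -p functional-990825 config get cpus    # exit 14: not set
	out/minikube-linux-arm64 -p functional-990825 config set cpus 2
	out/minikube-linux-arm64 -p functional-990825 config get cpus    # prints 2
	out/minikube-linux-arm64 -p functional-990825 config unset cpus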

TestFunctional/parallel/DashboardCmd (14.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-990825 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-990825 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 593872: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.81s)

TestFunctional/parallel/DryRun (0.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-990825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-990825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (289.023336ms)
-- stdout --
	* [functional-990825] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0327 19:09:50.067017  593314 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:09:50.067157  593314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:09:50.067168  593314 out.go:304] Setting ErrFile to fd 2...
	I0327 19:09:50.067173  593314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:09:50.067426  593314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 19:09:50.069353  593314 out.go:298] Setting JSON to false
	I0327 19:09:50.071881  593314 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10328,"bootTime":1711556262,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 19:09:50.071957  593314 start.go:139] virtualization:  
	I0327 19:09:50.074716  593314 out.go:177] * [functional-990825] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 19:09:50.077081  593314 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 19:09:50.077197  593314 notify.go:220] Checking for updates...
	I0327 19:09:50.081516  593314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:09:50.084149  593314 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:09:50.086404  593314 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	I0327 19:09:50.088596  593314 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 19:09:50.090981  593314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 19:09:50.093964  593314 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:09:50.094778  593314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:09:50.128129  593314 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 19:09:50.128259  593314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 19:09:50.236086  593314 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-03-27 19:09:50.222450643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 19:09:50.236195  593314 docker.go:295] overlay module found
	I0327 19:09:50.238662  593314 out.go:177] * Using the docker driver based on existing profile
	I0327 19:09:50.240342  593314 start.go:297] selected driver: docker
	I0327 19:09:50.240363  593314 start.go:901] validating driver "docker" against &{Name:functional-990825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-990825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:09:50.240473  593314 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 19:09:50.243141  593314 out.go:177] 
	W0327 19:09:50.245100  593314 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 19:09:50.247159  593314 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-990825 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.63s)
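
--dry-run validates the requested settings against the existing profile without touching the cluster; 250MB is below the 1800MB usable minimum quoted above, so the first invocation exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while a flag set that validates cleanly passes:

	out/minikube-linux-arm64 start -p functional-990825 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=crio    # exit 23
	out/minikube-linux-arm64 start -p functional-990825 --dry-run \
	  --alsologtostderr -v=1 --driver=docker --container-runtime=crio   # passes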

TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-990825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-990825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (267.564593ms)
-- stdout --
	* [functional-990825] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0327 19:09:49.790629  593231 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:09:49.790850  593231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:09:49.790876  593231 out.go:304] Setting ErrFile to fd 2...
	I0327 19:09:49.790896  593231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:09:49.791285  593231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 19:09:49.791702  593231 out.go:298] Setting JSON to false
	I0327 19:09:49.792735  593231 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10328,"bootTime":1711556262,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 19:09:49.792836  593231 start.go:139] virtualization:  
	I0327 19:09:49.795594  593231 out.go:177] * [functional-990825] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0327 19:09:49.798584  593231 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 19:09:49.798709  593231 notify.go:220] Checking for updates...
	I0327 19:09:49.803656  593231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:09:49.806143  593231 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:09:49.808222  593231 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	I0327 19:09:49.811015  593231 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 19:09:49.813534  593231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 19:09:49.816664  593231 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:09:49.817172  593231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:09:49.855719  593231 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 19:09:49.855842  593231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 19:09:49.946774  593231 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-27 19:09:49.934708131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 19:09:49.946883  593231 docker.go:295] overlay module found
	I0327 19:09:49.951610  593231 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0327 19:09:49.953727  593231 start.go:297] selected driver: docker
	I0327 19:09:49.953740  593231 start.go:901] validating driver "docker" against &{Name:functional-990825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-990825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 19:09:49.953851  593231 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 19:09:49.956679  593231 out.go:177] 
	W0327 19:09:49.958744  593231 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0327 19:09:49.961272  593231 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)
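The localized failure above mirrors the English DryRun variant; minikube picks the message language from the standard locale environment variables. A minimal sketch of forcing the same French output by hand (the locale value is illustrative):

# request an undersized allocation with a French locale; expect the localized
# RSRC_INSUFFICIENT_REQ_MEMORY error and exit status 23, as in the run above
LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-990825 --dry-run --memory 250MB --driver=docker --container-runtime=crio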

                                                
                                    
TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
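For reference, the -f flag exercised above takes a Go template over the status struct; a minimal sketch using the same fields (the test's template spells one key "kublet", but the field it reads is {{.Kubelet}}):

# print one line with each component's state via a custom Go template
out/minikube-linux-arm64 -p functional-990825 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'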

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-990825 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-990825 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hnbkz" [902c8cfb-70f3-4940-86db-0e97e8eb642a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hnbkz" [902c8cfb-70f3-4940-86db-0e97e8eb642a] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004049318s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31769
functional_test.go:1671: http://192.168.49.2:31769: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-hnbkz

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31769
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.64s)
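The flow above is reproducible by hand; a minimal sketch using the same names and image (the NodePort is assigned by the cluster, so the URL will vary from run to run):

# deploy the echo server, expose it as a NodePort, then hit it through the node URL
kubectl --context functional-990825 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-990825 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-arm64 -p functional-990825 service hello-node-connect --url)
curl -s "$URL"   # echoes back hostname and request details, as in the body above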

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [22a011d9-b933-4bf9-ae7c-b35d41ad73fe] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006125068s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-990825 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-990825 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-990825 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-990825 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0e19c800-6849-43b4-82c9-df7b83c05014] Pending
helpers_test.go:344: "sp-pod" [0e19c800-6849-43b4-82c9-df7b83c05014] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0e19c800-6849-43b4-82c9-df7b83c05014] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005029705s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-990825 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-990825 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-990825 delete -f testdata/storage-provisioner/pod.yaml: (1.027473814s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-990825 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [63821550-c384-415d-98a0-69b888e5b0ae] Pending
helpers_test.go:344: "sp-pod" [63821550-c384-415d-98a0-69b888e5b0ae] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00415238s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-990825 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.03s)
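The claim itself comes from testdata/storage-provisioner/pvc.yaml, whose contents are not shown in the log; a minimal sketch of an equivalent claim against the default storage class (the manifest below is illustrative, not the repo's testdata):

# create a claim named myclaim, matching the name queried above
kubectl --context functional-990825 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-990825 get pvc myclaim -o=json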

                                                
                                    
TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh -n functional-990825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cp functional-990825:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3719954861/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh -n functional-990825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh -n functional-990825 "sudo cat /tmp/does/not/exist/cp-test.txt"
E0327 19:09:17.676197  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CpCmd (2.43s)
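The three runs above cover both copy directions; a minimal sketch of the same round trip (the host-side destination path is illustrative):

# host -> guest, verify with ssh/cat, then guest -> host
out/minikube-linux-arm64 -p functional-990825 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-arm64 -p functional-990825 ssh -n functional-990825 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-arm64 -p functional-990825 cp functional-990825:/home/docker/cp-test.txt /tmp/cp-test.txt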

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/567623/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo cat /etc/test/nested/copy/567623/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
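The guest path above comes from minikube's file-sync mechanism: files staged under $MINIKUBE_HOME/files on the host are copied into the guest at the same relative path when the machine starts. A minimal sketch, assuming the MINIKUBE_HOME from this run:

# stage a file on the host; it is synced into the guest on the next start
mkdir -p /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/test/nested/copy/567623
echo "Test file for checking file sync process" > /home/jenkins/minikube-integration/18517-562206/.minikube/files/etc/test/nested/copy/567623/hosts
out/minikube-linux-arm64 -p functional-990825 ssh "sudo cat /etc/test/nested/copy/567623/hosts"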

                                                
                                    
TestFunctional/parallel/CertSync (2.02s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/567623.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo cat /etc/ssl/certs/567623.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/567623.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo cat /usr/share/ca-certificates/567623.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/5676232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo cat /etc/ssl/certs/5676232.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/5676232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo cat /usr/share/ca-certificates/5676232.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.02s)
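The hash-named entries checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming for CA directories; a minimal sketch of deriving such a name, assuming that convention applies to the cert from the log:

# prints the 8-hex-digit subject hash used as the <hash>.0 filename
openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/567623.pem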

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-990825 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 ssh "sudo systemctl is-active docker": exit status 1 (285.759169ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 ssh "sudo systemctl is-active containerd": exit status 1 (320.256567ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
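The non-zero exits above are the point of the test: systemctl is-active prints the unit state and exits non-zero for anything but an active unit (the remote status 3 in stderr is its inactive code), so both disabled runtimes failing the check counts as a pass. A minimal sketch, assuming crio is the unit name of the active runtime in this profile:

# only the configured runtime should report active (exit 0)
out/minikube-linux-arm64 -p functional-990825 ssh "sudo systemctl is-active crio"
out/minikube-linux-arm64 -p functional-990825 ssh "sudo systemctl is-active docker"   # inactive, non-zero exit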

                                                
                                    
TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-990825 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-990825 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-990825 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-990825 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 591052: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-990825 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-990825 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b1628451-6ede-4e6c-9973-1068ae9d8b0b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b1628451-6ede-4e6c-9973-1068ae9d8b0b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004267281s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-990825 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.83.216 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
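The working URL above exists because the tunnel daemon started earlier assigns a routable ingress IP to LoadBalancer services; a minimal sketch of the same check (the assigned IP varies per run):

# with a tunnel running, the service's ingress IP answers directly from the host
out/minikube-linux-arm64 -p functional-990825 tunnel --alsologtostderr &
kubectl --context functional-990825 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.106.83.216/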

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-990825 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-990825 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-990825 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-mzzrb" [7d831290-6b66-436c-807d-38c97e82248d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-mzzrb" [7d831290-6b66-436c-807d-38c97e82248d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004956116s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "339.430801ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "66.634579ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "318.440444ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "68.484623ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.31s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdany-port879141281/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711566583292088078" to /tmp/TestFunctionalparallelMountCmdany-port879141281/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711566583292088078" to /tmp/TestFunctionalparallelMountCmdany-port879141281/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711566583292088078" to /tmp/TestFunctionalparallelMountCmdany-port879141281/001/test-1711566583292088078
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (413.186567ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 27 19:09 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 27 19:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 27 19:09 test-1711566583292088078
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh cat /mount-9p/test-1711566583292088078
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-990825 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d8827653-5223-4b3f-8a4a-546fe1a813c3] Pending
helpers_test.go:344: "busybox-mount" [d8827653-5223-4b3f-8a4a-546fe1a813c3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d8827653-5223-4b3f-8a4a-546fe1a813c3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d8827653-5223-4b3f-8a4a-546fe1a813c3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004290127s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-990825 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdany-port879141281/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.31s)
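The sequence above (mount, findmnt probe, pod read/write, unmount) is reproducible by hand; a minimal sketch with an illustrative host directory:

# publish a host directory into the guest over 9p, then verify from inside
out/minikube-linux-arm64 mount -p functional-990825 /tmp/somedir:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-990825 ssh -- ls -la /mount-9p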

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 service list -o json
functional_test.go:1490: Took "588.347877ms" to run "out/minikube-linux-arm64 -p functional-990825 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31521
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31521
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdspecific-port2283360390/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (442.012941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdspecific-port2283360390/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 ssh "sudo umount -f /mount-9p": exit status 1 (343.650914ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-990825 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdspecific-port2283360390/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1070713139/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1070713139/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1070713139/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-990825 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1070713139/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1070713139/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-990825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1070713139/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 version -o=json --components: (1.159837465s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-990825 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-990825
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-990825 image ls --format short --alsologtostderr:
I0327 19:10:17.370038  595678 out.go:291] Setting OutFile to fd 1 ...
I0327 19:10:17.370293  595678 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.370323  595678 out.go:304] Setting ErrFile to fd 2...
I0327 19:10:17.370341  595678 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.370711  595678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
I0327 19:10:17.371391  595678 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.371609  595678 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.372288  595678 cli_runner.go:164] Run: docker container inspect functional-990825 --format={{.State.Status}}
I0327 19:10:17.387795  595678 ssh_runner.go:195] Run: systemctl --version
I0327 19:10:17.387863  595678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-990825
I0327 19:10:17.406104  595678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/functional-990825/id_rsa Username:docker}
I0327 19:10:17.498933  595678 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
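The listing is available in the three formats this run exercises; a minimal sketch:

# same crictl-backed image data, rendered as tags only, a table, or JSON
out/minikube-linux-arm64 -p functional-990825 image ls --format short
out/minikube-linux-arm64 -p functional-990825 image ls --format table
out/minikube-linux-arm64 -p functional-990825 image ls --format json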

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-990825 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 4b51f9f6bc9b9 | 59.2MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 2581114f5709d | 124MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | latest             | 070027a3cbe09 | 196MB  |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 121d70d9a3805 | 119MB  |
| registry.k8s.io/kube-proxy              | v1.29.3            | 0e9b4a0d1e86d | 86.8MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| docker.io/library/nginx                 | alpine             | b8c82647e8a25 | 45.4MB |
| gcr.io/google-containers/addon-resizer  | functional-990825  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-990825 image ls --format table --alsologtostderr:
I0327 19:10:17.640872  595736 out.go:291] Setting OutFile to fd 1 ...
I0327 19:10:17.641097  595736 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.641123  595736 out.go:304] Setting ErrFile to fd 2...
I0327 19:10:17.641142  595736 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.641433  595736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
I0327 19:10:17.642141  595736 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.642321  595736 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.642824  595736 cli_runner.go:164] Run: docker container inspect functional-990825 --format={{.State.Status}}
I0327 19:10:17.661215  595736 ssh_runner.go:195] Run: systemctl --version
I0327 19:10:17.661276  595736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-990825
I0327 19:10:17.690164  595736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/functional-990825/id_rsa Username:docker}
I0327 19:10:17.782369  595736 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-990825 image ls --format json --alsologtostderr:
[{"id":"b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742","docker.io/library/nginx@sha256:fe6e879bfe52091d423aa46efab8899ee4da7fdc7ed7baa558dcabf3823eb0d7"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45393258"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f6
1bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-990825"],"size":"34114467"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k
8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775","repoDigests":["registry.k8s.io/kube-proxy@sha256:51e1a0d7b1254f98246e4967add615b35d8c25d2bf71e3ff64f7fe7c27fb8d79","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"86773651"},{"id":"2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794","repoDigests":["registry.k8s.io/kube-apiserver@sha256:cdfd79dbc97fb3da60fefff3622fd35d6772e4db06f523eec4630979073fc611","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"123925
451"},{"id":"4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:107cad99dfbfbb6192d7cb685fc7702c9798cffb3fd63551fd00ae0009cf4612","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59175732"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f4
1a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e","docker.io/library/nginx@sha256:757f33a85ed94069cf2e5c4ef4047d0e8d63d567bc7667925f886423f277fb3b"],"repoTags":["docker.io/library/nginx:latest"],"size":"196117976"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"8cb2091f603
e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377b
a2680df25b0b97b3be12fa10f15ad67104","registry.k8s.io/kube-controller-manager@sha256:e89c6fb613c47831235c0758443a7a0b735ff97da7a41f9f820f3db035708c19"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"118747956"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-990825 image ls --format json --alsologtostderr:
I0327 19:10:17.902221  595811 out.go:291] Setting OutFile to fd 1 ...
I0327 19:10:17.902441  595811 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.902466  595811 out.go:304] Setting ErrFile to fd 2...
I0327 19:10:17.902486  595811 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.902760  595811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
I0327 19:10:17.903440  595811 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.903615  595811 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.904156  595811 cli_runner.go:164] Run: docker container inspect functional-990825 --format={{.State.Status}}
I0327 19:10:17.923565  595811 ssh_runner.go:195] Run: systemctl --version
I0327 19:10:17.923632  595811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-990825
I0327 19:10:17.941793  595811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/functional-990825/id_rsa Username:docker}
I0327 19:10:18.031319  595811 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
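The JSON listing above is the machine-readable twin of the yaml form checked next. A minimal sketch for querying it by hand, assuming jq is available on the host (jq is not part of this test):

out/minikube-linux-arm64 -p functional-990825 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])  \(.size)"'   # print tag and size for tagged images only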

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-990825 image ls --format yaml --alsologtostderr:
- id: 4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:107cad99dfbfbb6192d7cb685fc7702c9798cffb3fd63551fd00ae0009cf4612
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59175732"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
- docker.io/library/nginx@sha256:fe6e879bfe52091d423aa46efab8899ee4da7fdc7ed7baa558dcabf3823eb0d7
repoTags:
- docker.io/library/nginx:alpine
size: "45393258"
- id: 121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
- registry.k8s.io/kube-controller-manager@sha256:e89c6fb613c47831235c0758443a7a0b735ff97da7a41f9f820f3db035708c19
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "118747956"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:cdfd79dbc97fb3da60fefff3622fd35d6772e4db06f523eec4630979073fc611
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "123925451"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
- docker.io/library/nginx@sha256:757f33a85ed94069cf2e5c4ef4047d0e8d63d567bc7667925f886423f277fb3b
repoTags:
- docker.io/library/nginx:latest
size: "196117976"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-990825
size: "34114467"
- id: 0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775
repoDigests:
- registry.k8s.io/kube-proxy@sha256:51e1a0d7b1254f98246e4967add615b35d8c25d2bf71e3ff64f7fe7c27fb8d79
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "86773651"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-990825 image ls --format yaml --alsologtostderr:
I0327 19:10:17.370391  595679 out.go:291] Setting OutFile to fd 1 ...
I0327 19:10:17.370499  595679 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.370543  595679 out.go:304] Setting ErrFile to fd 2...
I0327 19:10:17.370553  595679 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.370802  595679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
I0327 19:10:17.371422  595679 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.371546  595679 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.372051  595679 cli_runner.go:164] Run: docker container inspect functional-990825 --format={{.State.Status}}
I0327 19:10:17.391190  595679 ssh_runner.go:195] Run: systemctl --version
I0327 19:10:17.391258  595679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-990825
I0327 19:10:17.417504  595679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/functional-990825/id_rsa Username:docker}
I0327 19:10:17.506135  595679 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
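Note the entries with repoTags: [] (dashboard and metrics-scraper): those images are present by digest only, with no local tag. A hedged way to count them from the same output, using only grep with a fixed-string match:

out/minikube-linux-arm64 -p functional-990825 image ls --format yaml \
  | grep -cF 'repoTags: []'   # number of digest-only images (2 in the run above)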

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-990825 ssh pgrep buildkitd: exit status 1 (333.53838ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image build -t localhost/my-image:functional-990825 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 image build -t localhost/my-image:functional-990825 testdata/build --alsologtostderr: (2.174300211s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-990825 image build -t localhost/my-image:functional-990825 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> dc431e4633c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-990825
--> 04a8f0c7c7c
Successfully tagged localhost/my-image:functional-990825
04a8f0c7c7c1693f6e3efe1a5b230fd470ec966578505633d0d2d57e6ef868b2
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-990825 image build -t localhost/my-image:functional-990825 testdata/build --alsologtostderr:
I0327 19:10:17.986470  595819 out.go:291] Setting OutFile to fd 1 ...
I0327 19:10:17.987425  595819 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.987448  595819 out.go:304] Setting ErrFile to fd 2...
I0327 19:10:17.987454  595819 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:10:17.987725  595819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
I0327 19:10:17.988360  595819 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.988891  595819 config.go:182] Loaded profile config "functional-990825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0327 19:10:17.989425  595819 cli_runner.go:164] Run: docker container inspect functional-990825 --format={{.State.Status}}
I0327 19:10:18.009312  595819 ssh_runner.go:195] Run: systemctl --version
I0327 19:10:18.009386  595819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-990825
I0327 19:10:18.037099  595819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/functional-990825/id_rsa Username:docker}
I0327 19:10:18.134430  595819 build_images.go:161] Building image from path: /tmp/build.2966324126.tar
I0327 19:10:18.134575  595819 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0327 19:10:18.144467  595819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2966324126.tar
I0327 19:10:18.148046  595819 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2966324126.tar: stat -c "%s %y" /var/lib/minikube/build/build.2966324126.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2966324126.tar': No such file or directory
I0327 19:10:18.148077  595819 ssh_runner.go:362] scp /tmp/build.2966324126.tar --> /var/lib/minikube/build/build.2966324126.tar (3072 bytes)
I0327 19:10:18.177885  595819 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2966324126
I0327 19:10:18.187482  595819 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2966324126 -xf /var/lib/minikube/build/build.2966324126.tar
I0327 19:10:18.196721  595819 crio.go:315] Building image: /var/lib/minikube/build/build.2966324126
I0327 19:10:18.196821  595819 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-990825 /var/lib/minikube/build/build.2966324126 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0327 19:10:20.051845  595819 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-990825 /var/lib/minikube/build/build.2966324126 --cgroup-manager=cgroupfs: (1.854990529s)
I0327 19:10:20.051935  595819 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2966324126
I0327 19:10:20.061237  595819 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2966324126.tar
I0327 19:10:20.070194  595819 build_images.go:217] Built localhost/my-image:functional-990825 from /tmp/build.2966324126.tar
I0327 19:10:20.070269  595819 build_images.go:133] succeeded building to: functional-990825
I0327 19:10:20.070280  595819 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)
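The stderr trace above spells out how image build works on the crio runtime: minikube tars the local build context, copies the tarball into the node over SSH, untars it under /var/lib/minikube/build, and runs podman build inside the node. A hedged manual replay of those steps (the paths and the use of minikube cp/ssh here are illustrative, not the test's own helpers, and may need permission tweaks):

# 1. pack the local context, as build_images.go does
tar -cf /tmp/ctx.tar -C testdata/build .
# 2. stage and unpack it inside the node
out/minikube-linux-arm64 -p functional-990825 cp /tmp/ctx.tar functional-990825:/home/docker/ctx.tar
out/minikube-linux-arm64 -p functional-990825 ssh -- sudo mkdir -p /var/lib/minikube/build/ctx
out/minikube-linux-arm64 -p functional-990825 ssh -- sudo tar -C /var/lib/minikube/build/ctx -xf /home/docker/ctx.tar
# 3. build with podman, with the same flags as logged above
out/minikube-linux-arm64 -p functional-990825 ssh -- sudo podman build -t localhost/my-image:functional-990825 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs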

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.480159097s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-990825
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.50s)
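Setup stages a uniquely named copy of a public image so the later image subtests cannot collide across profiles: pull a fixed upstream tag, then re-tag it with the profile name. A hedged one-liner to confirm the staged tag exists locally:

docker image inspect gcr.io/google-containers/addon-resizer:functional-990825 --format '{{.Id}}'   # prints the image ID if the tag was created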

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image load --daemon gcr.io/google-containers/addon-resizer:functional-990825 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 image load --daemon gcr.io/google-containers/addon-resizer:functional-990825 --alsologtostderr: (5.705584492s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image load --daemon gcr.io/google-containers/addon-resizer:functional-990825 --alsologtostderr
2024/03/27 19:10:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 image load --daemon gcr.io/google-containers/addon-resizer:functional-990825 --alsologtostderr: (3.024158489s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.368675604s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-990825
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image load --daemon gcr.io/google-containers/addon-resizer:functional-990825 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-990825 image load --daemon gcr.io/google-containers/addon-resizer:functional-990825 --alsologtostderr: (3.652765127s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image save gcr.io/google-containers/addon-resizer:functional-990825 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)
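The file written by image save is an ordinary image tarball, so it can be sanity-checked without loading it anywhere; a hedged example, assuming tar on the host:

tar -tf /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar | head   # list the first few archive members (manifest, layers)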

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image rm gcr.io/google-containers/addon-resizer:functional-990825 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-990825
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-990825 image save --daemon gcr.io/google-containers/addon-resizer:functional-990825 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-990825
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-990825
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-990825
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-990825
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (162.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-738145 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0327 19:11:33.833164  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:12:01.517200  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-738145 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m41.448173942s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (162.30s)
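With --ha, minikube brings up three control-plane nodes behind a single load-balanced endpoint before a worker is added in the next subtest. A hedged spot check of the resulting topology, reusing the context name from this run:

kubectl --context ha-738145 get nodes
# expect ha-738145, ha-738145-m02 and ha-738145-m03 to carry the control-plane role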

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-738145 -- rollout status deployment/busybox: (3.861562903s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-7sgbt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-g6h8d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-hjdcl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-7sgbt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-g6h8d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-hjdcl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-7sgbt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-g6h8d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-hjdcl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.86s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-7sgbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-7sgbt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-g6h8d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-g6h8d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-hjdcl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-738145 -- exec busybox-7fdf7869d9-hjdcl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)
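The nslookup | awk 'NR==5' | cut -d' ' -f3 pipeline deserves a gloss: busybox's nslookup prints the resolver first, then the answer, so line 5, field 3 is the resolved address. Illustrative (not captured) output, assuming the default busybox format:

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1    <- NR==5; cut -d' ' -f3 yields 192.168.49.1, the host gateway pinged next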

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (51.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-738145 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-738145 -v=7 --alsologtostderr: (50.729503693s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.73s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-738145 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp testdata/cp-test.txt ha-738145:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3502835903/001/cp-test_ha-738145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145:/home/docker/cp-test.txt ha-738145-m02:/home/docker/cp-test_ha-738145_ha-738145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m02 "sudo cat /home/docker/cp-test_ha-738145_ha-738145-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145:/home/docker/cp-test.txt ha-738145-m03:/home/docker/cp-test_ha-738145_ha-738145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m03 "sudo cat /home/docker/cp-test_ha-738145_ha-738145-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145:/home/docker/cp-test.txt ha-738145-m04:/home/docker/cp-test_ha-738145_ha-738145-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m04 "sudo cat /home/docker/cp-test_ha-738145_ha-738145-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp testdata/cp-test.txt ha-738145-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3502835903/001/cp-test_ha-738145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m02:/home/docker/cp-test.txt ha-738145:/home/docker/cp-test_ha-738145-m02_ha-738145.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145 "sudo cat /home/docker/cp-test_ha-738145-m02_ha-738145.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m02:/home/docker/cp-test.txt ha-738145-m03:/home/docker/cp-test_ha-738145-m02_ha-738145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m03 "sudo cat /home/docker/cp-test_ha-738145-m02_ha-738145-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m02:/home/docker/cp-test.txt ha-738145-m04:/home/docker/cp-test_ha-738145-m02_ha-738145-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m04 "sudo cat /home/docker/cp-test_ha-738145-m02_ha-738145-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp testdata/cp-test.txt ha-738145-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3502835903/001/cp-test_ha-738145-m03.txt
E0327 19:14:17.458073  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 19:14:17.463392  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 19:14:17.473883  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 19:14:17.494278  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 19:14:17.534519  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 19:14:17.614687  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m03 "sudo cat /home/docker/cp-test.txt"
E0327 19:14:17.775312  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m03:/home/docker/cp-test.txt ha-738145:/home/docker/cp-test_ha-738145-m03_ha-738145.txt
E0327 19:14:18.095716  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m03 "sudo cat /home/docker/cp-test.txt"
E0327 19:14:18.738189  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145 "sudo cat /home/docker/cp-test_ha-738145-m03_ha-738145.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m03:/home/docker/cp-test.txt ha-738145-m02:/home/docker/cp-test_ha-738145-m03_ha-738145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m02 "sudo cat /home/docker/cp-test_ha-738145-m03_ha-738145-m02.txt"
E0327 19:14:20.018384  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m03:/home/docker/cp-test.txt ha-738145-m04:/home/docker/cp-test_ha-738145-m03_ha-738145-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m04 "sudo cat /home/docker/cp-test_ha-738145-m03_ha-738145-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp testdata/cp-test.txt ha-738145-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3502835903/001/cp-test_ha-738145-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m04:/home/docker/cp-test.txt ha-738145:/home/docker/cp-test_ha-738145-m04_ha-738145.txt
E0327 19:14:22.579309  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145 "sudo cat /home/docker/cp-test_ha-738145-m04_ha-738145.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m04:/home/docker/cp-test.txt ha-738145-m02:/home/docker/cp-test_ha-738145-m04_ha-738145-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m02 "sudo cat /home/docker/cp-test_ha-738145-m04_ha-738145-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m04:/home/docker/cp-test.txt ha-738145-m03:/home/docker/cp-test_ha-738145-m04_ha-738145-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 ssh -n ha-738145-m03 "sudo cat /home/docker/cp-test_ha-738145-m04_ha-738145-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.97s)
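Every cp invocation above uses the same <node>:<path> addressing, and as the local/tmp cases show it works in either direction; a hedged example pulling a file back out of a secondary node:

out/minikube-linux-arm64 -p ha-738145 cp ha-738145-m02:/home/docker/cp-test.txt /tmp/from-m02.txt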

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 node stop m02 -v=7 --alsologtostderr
E0327 19:14:27.699600  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-738145 node stop m02 -v=7 --alsologtostderr: (12.048544481s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr
E0327 19:14:37.940645  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr: exit status 7 (739.851095ms)

                                                
                                                
-- stdout --
	ha-738145
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-738145-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-738145-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-738145-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 19:14:37.818348  610719 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:14:37.818558  610719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:14:37.818586  610719 out.go:304] Setting ErrFile to fd 2...
	I0327 19:14:37.818605  610719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:14:37.819015  610719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 19:14:37.819241  610719 out.go:298] Setting JSON to false
	I0327 19:14:37.819314  610719 mustload.go:65] Loading cluster: ha-738145
	I0327 19:14:37.819365  610719 notify.go:220] Checking for updates...
	I0327 19:14:37.819831  610719 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:14:37.819868  610719 status.go:255] checking status of ha-738145 ...
	I0327 19:14:37.820390  610719 cli_runner.go:164] Run: docker container inspect ha-738145 --format={{.State.Status}}
	I0327 19:14:37.844320  610719 status.go:330] ha-738145 host status = "Running" (err=<nil>)
	I0327 19:14:37.844343  610719 host.go:66] Checking if "ha-738145" exists ...
	I0327 19:14:37.844617  610719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145
	I0327 19:14:37.865070  610719 host.go:66] Checking if "ha-738145" exists ...
	I0327 19:14:37.865376  610719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 19:14:37.865426  610719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145
	I0327 19:14:37.882661  610719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145/id_rsa Username:docker}
	I0327 19:14:37.975575  610719 ssh_runner.go:195] Run: systemctl --version
	I0327 19:14:37.980903  610719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:14:37.998436  610719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 19:14:38.078405  610719 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-03-27 19:14:38.06778326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 19:14:38.079107  610719 kubeconfig.go:125] found "ha-738145" server: "https://192.168.49.254:8443"
	I0327 19:14:38.079134  610719 api_server.go:166] Checking apiserver status ...
	I0327 19:14:38.079184  610719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:14:38.091817  610719 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	I0327 19:14:38.102076  610719 api_server.go:182] apiserver freezer: "12:freezer:/docker/cac7717827e81ebff1eb8d7b8f9bcd5bba52cdc28d6814b6d143c2b945b6588f/crio/crio-7e00e385f2df4a11f3a3bb4d402e1b2a1a6c67dffcf44dbba5c6a2e1cbd098e8"
	I0327 19:14:38.102155  610719 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cac7717827e81ebff1eb8d7b8f9bcd5bba52cdc28d6814b6d143c2b945b6588f/crio/crio-7e00e385f2df4a11f3a3bb4d402e1b2a1a6c67dffcf44dbba5c6a2e1cbd098e8/freezer.state
	I0327 19:14:38.111351  610719 api_server.go:204] freezer state: "THAWED"
	I0327 19:14:38.111377  610719 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0327 19:14:38.119704  610719 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0327 19:14:38.119734  610719 status.go:422] ha-738145 apiserver status = Running (err=<nil>)
	I0327 19:14:38.119746  610719 status.go:257] ha-738145 status: &{Name:ha-738145 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 19:14:38.119763  610719 status.go:255] checking status of ha-738145-m02 ...
	I0327 19:14:38.120081  610719 cli_runner.go:164] Run: docker container inspect ha-738145-m02 --format={{.State.Status}}
	I0327 19:14:38.138764  610719 status.go:330] ha-738145-m02 host status = "Stopped" (err=<nil>)
	I0327 19:14:38.138788  610719 status.go:343] host is not running, skipping remaining checks
	I0327 19:14:38.138796  610719 status.go:257] ha-738145-m02 status: &{Name:ha-738145-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 19:14:38.138838  610719 status.go:255] checking status of ha-738145-m03 ...
	I0327 19:14:38.139151  610719 cli_runner.go:164] Run: docker container inspect ha-738145-m03 --format={{.State.Status}}
	I0327 19:14:38.156163  610719 status.go:330] ha-738145-m03 host status = "Running" (err=<nil>)
	I0327 19:14:38.156187  610719 host.go:66] Checking if "ha-738145-m03" exists ...
	I0327 19:14:38.156583  610719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145-m03
	I0327 19:14:38.174658  610719 host.go:66] Checking if "ha-738145-m03" exists ...
	I0327 19:14:38.174982  610719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 19:14:38.175028  610719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m03
	I0327 19:14:38.195120  610719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m03/id_rsa Username:docker}
	I0327 19:14:38.289075  610719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:14:38.301447  610719 kubeconfig.go:125] found "ha-738145" server: "https://192.168.49.254:8443"
	I0327 19:14:38.301478  610719 api_server.go:166] Checking apiserver status ...
	I0327 19:14:38.301525  610719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:14:38.313350  610719 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1339/cgroup
	I0327 19:14:38.323102  610719 api_server.go:182] apiserver freezer: "12:freezer:/docker/b6f70dcb851381639be7dd197f5bd63000b81f2e5174aaa82e5c3e038854102b/crio/crio-4f9aca064af7d95bd78a00ebbbc33f7bf5cc5efd798f4dec266f70a2e75ce69e"
	I0327 19:14:38.323173  610719 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b6f70dcb851381639be7dd197f5bd63000b81f2e5174aaa82e5c3e038854102b/crio/crio-4f9aca064af7d95bd78a00ebbbc33f7bf5cc5efd798f4dec266f70a2e75ce69e/freezer.state
	I0327 19:14:38.332509  610719 api_server.go:204] freezer state: "THAWED"
	I0327 19:14:38.332539  610719 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0327 19:14:38.340251  610719 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0327 19:14:38.340281  610719 status.go:422] ha-738145-m03 apiserver status = Running (err=<nil>)
	I0327 19:14:38.340292  610719 status.go:257] ha-738145-m03 status: &{Name:ha-738145-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 19:14:38.340314  610719 status.go:255] checking status of ha-738145-m04 ...
	I0327 19:14:38.340607  610719 cli_runner.go:164] Run: docker container inspect ha-738145-m04 --format={{.State.Status}}
	I0327 19:14:38.356588  610719 status.go:330] ha-738145-m04 host status = "Running" (err=<nil>)
	I0327 19:14:38.356613  610719 host.go:66] Checking if "ha-738145-m04" exists ...
	I0327 19:14:38.356903  610719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-738145-m04
	I0327 19:14:38.373706  610719 host.go:66] Checking if "ha-738145-m04" exists ...
	I0327 19:14:38.374075  610719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 19:14:38.374134  610719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-738145-m04
	I0327 19:14:38.389138  610719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/ha-738145-m04/id_rsa Username:docker}
	I0327 19:14:38.475314  610719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:14:38.487893  610719 status.go:257] ha-738145-m04 status: &{Name:ha-738145-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
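
The stderr block above shows the whole apiserver health decision for each control-plane node: find the kube-apiserver PID with pgrep, read its freezer cgroup and require "THAWED" (i.e. the container is not paused), then GET /healthz and expect a 200 "ok". A minimal Go sketch of just the final healthz step, reusing the endpoint from the log; it skips TLS verification for brevity, whereas minikube's real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz mirrors the "Checking apiserver healthz at ..." step above:
// GET /healthz and treat HTTP 200 with body "ok" as healthy.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed apiserver cert; the real check uses the cluster CA
		// instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// Endpoint taken from the log lines above; adjust for another cluster.
	if err := probeHealthz("https://192.168.49.254:8443/healthz"); err != nil {
		fmt.Println("apiserver unhealthy:", err)
		return
	}
	fmt.Println("ok")
}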

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.44s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 node start m02 -v=7 --alsologtostderr
E0327 19:14:58.421838  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-738145 node start m02 -v=7 --alsologtostderr: (21.958504605s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr: (1.340765243s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.970702384s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.97s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (171.63s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-738145 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-738145 -v=7 --alsologtostderr
E0327 19:15:39.382124  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-738145 -v=7 --alsologtostderr: (36.906674692s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-738145 --wait=true -v=7 --alsologtostderr
E0327 19:16:33.833678  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:17:01.302514  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-738145 --wait=true -v=7 --alsologtostderr: (2m14.456426342s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-738145
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (171.63s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.96s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-738145 node delete m03 -v=7 --alsologtostderr: (11.978542647s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.96s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

TestMultiControlPlane/serial/StopCluster (35.72s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-738145 stop -v=7 --alsologtostderr: (35.618653911s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr: exit status 7 (103.508684ms)

-- stdout --
	ha-738145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-738145-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-738145-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0327 19:18:49.276162  624171 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:18:49.276292  624171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:18:49.276303  624171 out.go:304] Setting ErrFile to fd 2...
	I0327 19:18:49.276308  624171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:18:49.276562  624171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 19:18:49.276744  624171 out.go:298] Setting JSON to false
	I0327 19:18:49.276776  624171 mustload.go:65] Loading cluster: ha-738145
	I0327 19:18:49.276897  624171 notify.go:220] Checking for updates...
	I0327 19:18:49.277194  624171 config.go:182] Loaded profile config "ha-738145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:18:49.277206  624171 status.go:255] checking status of ha-738145 ...
	I0327 19:18:49.277701  624171 cli_runner.go:164] Run: docker container inspect ha-738145 --format={{.State.Status}}
	I0327 19:18:49.295143  624171 status.go:330] ha-738145 host status = "Stopped" (err=<nil>)
	I0327 19:18:49.295168  624171 status.go:343] host is not running, skipping remaining checks
	I0327 19:18:49.295176  624171 status.go:257] ha-738145 status: &{Name:ha-738145 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 19:18:49.295198  624171 status.go:255] checking status of ha-738145-m02 ...
	I0327 19:18:49.295512  624171 cli_runner.go:164] Run: docker container inspect ha-738145-m02 --format={{.State.Status}}
	I0327 19:18:49.310780  624171 status.go:330] ha-738145-m02 host status = "Stopped" (err=<nil>)
	I0327 19:18:49.310806  624171 status.go:343] host is not running, skipping remaining checks
	I0327 19:18:49.310813  624171 status.go:257] ha-738145-m02 status: &{Name:ha-738145-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 19:18:49.310837  624171 status.go:255] checking status of ha-738145-m04 ...
	I0327 19:18:49.311158  624171 cli_runner.go:164] Run: docker container inspect ha-738145-m04 --format={{.State.Status}}
	I0327 19:18:49.325727  624171 status.go:330] ha-738145-m04 host status = "Stopped" (err=<nil>)
	I0327 19:18:49.325751  624171 status.go:343] host is not running, skipping remaining checks
	I0327 19:18:49.325759  624171 status.go:257] ha-738145-m04 status: &{Name:ha-738145-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.72s)
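
The exit status 7 above is the assertion that matters: with every node stopped, `minikube status` reports the degraded state through its exit code instead of failing outright. A Go sketch of how a caller can tell a stopped cluster apart from a broken invocation; the binary path and profile name are the ones from this run and are assumptions anywhere else:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation as ha_test.go:537 above; the binary path is specific
	// to this CI checkout.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-738145", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // the human-readable block shown in stdout above

	var exitErr *exec.ExitError
	if err == nil {
		fmt.Println("all components running")
	} else if errors.As(err, &exitErr) {
		// A stopped cluster surfaces here (exit code 7 in this run),
		// not as a launch failure.
		fmt.Println("cluster not fully running, exit code:", exitErr.ExitCode())
	} else {
		log.Fatal(err) // the binary itself could not be started
	}
}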

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMultiControlPlane/serial/AddSecondaryNode (60.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-738145 --control-plane -v=7 --alsologtostderr
E0327 19:21:33.833320  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-738145 --control-plane -v=7 --alsologtostderr: (59.143647072s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-738145 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (60.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

TestJSONOutput/start/Command (76.5s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-880621 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0327 19:22:56.878130  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-880621 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m16.499936395s)
--- PASS: TestJSONOutput/start/Command (76.50s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-880621 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-880621 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-880621 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-880621 --output=json --user=testUser: (5.911238551s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-894563 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-894563 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.823356ms)

-- stdout --
	{"specversion":"1.0","id":"4257a03c-3098-4264-b64a-009f2cb4c658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-894563] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d300eb04-bbe0-459d-bcb4-b477a5fed251","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18517"}}
	{"specversion":"1.0","id":"6b0556d2-1469-4192-a088-962776e7875f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"38ed54f9-5f84-405f-a6bc-63fcfb37f0fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig"}}
	{"specversion":"1.0","id":"23b1b10b-389a-4c5d-8636-8ce4c1b4944b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube"}}
	{"specversion":"1.0","id":"e2372809-9daa-4b2a-a6ee-3371ccf30f68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bf23cd14-f144-46b8-ba20-a6dfd2974e61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"36870060-d211-4dc7-860a-1270ba93bc2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-894563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-894563
--- PASS: TestErrorJSONOutput (0.24s)
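
Every stdout line above is a CloudEvents-style JSON object, and the last one carries the expected DRV_UNSUPPORTED_OS error with exitcode 56. A small Go sketch for consuming such a stream, modelling only the fields visible above (a type string plus a string-valued data map):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event covers just the fields seen in the output above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe the JSON lines in, e.g.:
	//   out/minikube-linux-arm64 start ... --output=json | ./thisprogram
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise
		}
		switch {
		case strings.HasSuffix(ev.Type, ".error"):
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case strings.HasSuffix(ev.Type, ".step"):
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
}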

TestKicCustomNetwork/create_custom_network (47.41s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-876276 --network=
E0327 19:24:17.458218  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-876276 --network=: (45.296578083s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-876276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-876276
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-876276: (2.085887335s)
--- PASS: TestKicCustomNetwork/create_custom_network (47.41s)

TestKicCustomNetwork/use_default_bridge_network (36.19s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-696363 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-696363 --network=bridge: (34.115755737s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-696363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-696363
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-696363: (2.04448393s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.19s)

TestKicExistingNetwork (36.14s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-998628 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-998628 --network=existing-network: (34.022717142s)
helpers_test.go:175: Cleaning up "existing-network-998628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-998628
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-998628: (1.981183968s)
--- PASS: TestKicExistingNetwork (36.14s)

TestKicCustomSubnet (35.48s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-958112 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-958112 --subnet=192.168.60.0/24: (33.351907669s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-958112 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-958112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-958112
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-958112: (2.105772417s)
--- PASS: TestKicCustomSubnet (35.48s)
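
The subnet assertion rides on docker's Go-template output: {{(index .IPAM.Config 0).Subnet}} pulls the subnet of the network's first IPAM config entry. A standalone Go sketch of the same verification, reusing the network name and subnet from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const network = "custom-subnet-958112" // created by the test above
	const want = "192.168.60.0/24"         // requested via --subnet

	// Same template string the test passes to docker network inspect.
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		log.Fatalf("subnet mismatch: got %s, want %s", got, want)
	}
	fmt.Println("subnet verified:", want)
}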

TestKicStaticIP (35.79s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-522455 --static-ip=192.168.200.200
E0327 19:26:33.832741  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-522455 --static-ip=192.168.200.200: (33.51959516s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-522455 ip
helpers_test.go:175: Cleaning up "static-ip-522455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-522455
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-522455: (2.121571619s)
--- PASS: TestKicStaticIP (35.79s)
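
TestKicStaticIP pairs start --static-ip with a follow-up minikube ip to confirm the requested address was actually assigned. A Go sketch of that comparison; the CI-local binary path and the values from this run are assumptions anywhere else:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "static-ip-522455"
	const want = "192.168.200.200" // requested via --static-ip above

	// `minikube ip` prints the address the node actually received.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ip").Output()
	if err != nil {
		log.Fatal(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		log.Fatalf("static IP not honoured: got %s, want %s", got, want)
	}
	fmt.Println("static IP verified:", want)
}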

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (67.99s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-542539 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-542539 --driver=docker  --container-runtime=crio: (30.123565211s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-545427 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-545427 --driver=docker  --container-runtime=crio: (32.715892829s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-542539
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-545427
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-545427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-545427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-545427: (1.930680978s)
helpers_test.go:175: Cleaning up "first-542539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-542539
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-542539: (1.94830796s)
--- PASS: TestMinikubeProfile (67.99s)
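
The profile assertions above lean on profile list -ojson before and after switching the active profile. A Go sketch that consumes that output defensively: decoding into raw messages avoids committing to the full profile schema, and the grouping of the top-level JSON is an assumption about its shape rather than something shown in this log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same command the test runs after each `profile` switch above.
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed shape: top-level groups mapping to arrays of profile objects.
	var groups map[string][]json.RawMessage
	if err := json.Unmarshal(out, &groups); err != nil {
		log.Fatal(err)
	}
	for name, profiles := range groups {
		fmt.Printf("%s: %d profile(s)\n", name, len(profiles))
	}
}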

TestMountStart/serial/StartWithMountFirst (6.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-916237 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-916237 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.584537204s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.58s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-916237 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.06s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-929923 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-929923 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.058261375s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.06s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-929923 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-916237 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-916237 --alsologtostderr -v=5: (1.584736517s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-929923 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-929923
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-929923: (1.202149471s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.19s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-929923
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-929923: (7.187581863s)
--- PASS: TestMountStart/serial/RestartStopped (8.19s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-929923 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (128.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175436 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0327 19:29:17.458173  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-175436 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m8.35552123s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.89s)

TestMultiNode/serial/DeployApp2Nodes (5.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-175436 -- rollout status deployment/busybox: (3.122145113s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-n78fs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-r7f57 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-n78fs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-r7f57 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-n78fs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-r7f57 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.09s)

TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-n78fs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-n78fs -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-r7f57 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175436 -- exec busybox-7fdf7869d9-r7f57 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)
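
Each pod above is checked with two execs: a pipeline that extracts the IP behind host.minikube.internal (busybox's nslookup prints the resolved address on its fifth output line, third field, hence awk 'NR==5' | cut -d' ' -f3), then a single ping against that IP. A Go sketch of the same pair, using plain kubectl with the cluster's context rather than the `minikube kubectl` wrapper the test goes through; the pod name is taken from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "multinode-175436"
	const pod = "busybox-7fdf7869d9-n78fs" // pod name from this run

	// Resolve the host's IP from inside the pod, as multinode_test.go:572 does.
	lookup := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		log.Fatal(err)
	}
	hostIP := strings.TrimSpace(string(out))

	// One ping proves pod-to-host connectivity (multinode_test.go:583).
	if err := exec.Command("kubectl", "--context", ctx,
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
		log.Fatalf("host %s unreachable from pod %s: %v", hostIP, pod, err)
	}
	fmt.Println("host reachable from pod:", hostIP)
}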

TestMultiNode/serial/AddNode (16.55s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-175436 -v 3 --alsologtostderr
E0327 19:30:40.503743  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-175436 -v 3 --alsologtostderr: (15.906544248s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.55s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-175436 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp testdata/cp-test.txt multinode-175436:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile825293798/001/cp-test_multinode-175436.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436:/home/docker/cp-test.txt multinode-175436-m02:/home/docker/cp-test_multinode-175436_multinode-175436-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m02 "sudo cat /home/docker/cp-test_multinode-175436_multinode-175436-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436:/home/docker/cp-test.txt multinode-175436-m03:/home/docker/cp-test_multinode-175436_multinode-175436-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m03 "sudo cat /home/docker/cp-test_multinode-175436_multinode-175436-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp testdata/cp-test.txt multinode-175436-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile825293798/001/cp-test_multinode-175436-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436-m02:/home/docker/cp-test.txt multinode-175436:/home/docker/cp-test_multinode-175436-m02_multinode-175436.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436 "sudo cat /home/docker/cp-test_multinode-175436-m02_multinode-175436.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436-m02:/home/docker/cp-test.txt multinode-175436-m03:/home/docker/cp-test_multinode-175436-m02_multinode-175436-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m03 "sudo cat /home/docker/cp-test_multinode-175436-m02_multinode-175436-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp testdata/cp-test.txt multinode-175436-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile825293798/001/cp-test_multinode-175436-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436-m03:/home/docker/cp-test.txt multinode-175436:/home/docker/cp-test_multinode-175436-m03_multinode-175436.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436 "sudo cat /home/docker/cp-test_multinode-175436-m03_multinode-175436.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 cp multinode-175436-m03:/home/docker/cp-test.txt multinode-175436-m02:/home/docker/cp-test_multinode-175436-m03_multinode-175436-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 ssh -n multinode-175436-m02 "sudo cat /home/docker/cp-test_multinode-175436-m03_multinode-175436-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.28s)
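
The CopyFile sequence above is one round-trip repeated for every source/destination pair: cp a file into a node, then ssh -n <node> "sudo cat ..." to read it back. A condensed Go sketch of a single round-trip, using the profile and paths from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the suite's minikube binary with the given arguments.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "multinode-175436"

	// Push a local file into the primary node, as helpers_test.go:556 does.
	if out, err := run("-p", profile, "cp", "testdata/cp-test.txt",
		profile+":/home/docker/cp-test.txt"); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// Read it back over ssh to confirm the copy landed intact.
	out, err := run("-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	fmt.Print(out)
}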

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-175436 node stop m03: (1.232894647s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-175436 status: exit status 7 (511.630023ms)

-- stdout --
	multinode-175436
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175436-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175436-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-175436 status --alsologtostderr: exit status 7 (532.349901ms)

-- stdout --
	multinode-175436
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175436-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175436-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0327 19:31:05.742833  677077 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:31:05.743006  677077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:31:05.743037  677077 out.go:304] Setting ErrFile to fd 2...
	I0327 19:31:05.743060  677077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:31:05.743318  677077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 19:31:05.743528  677077 out.go:298] Setting JSON to false
	I0327 19:31:05.743591  677077 mustload.go:65] Loading cluster: multinode-175436
	I0327 19:31:05.743666  677077 notify.go:220] Checking for updates...
	I0327 19:31:05.744058  677077 config.go:182] Loaded profile config "multinode-175436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:31:05.744091  677077 status.go:255] checking status of multinode-175436 ...
	I0327 19:31:05.744748  677077 cli_runner.go:164] Run: docker container inspect multinode-175436 --format={{.State.Status}}
	I0327 19:31:05.766904  677077 status.go:330] multinode-175436 host status = "Running" (err=<nil>)
	I0327 19:31:05.766930  677077 host.go:66] Checking if "multinode-175436" exists ...
	I0327 19:31:05.767366  677077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-175436
	I0327 19:31:05.787079  677077 host.go:66] Checking if "multinode-175436" exists ...
	I0327 19:31:05.787533  677077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 19:31:05.787580  677077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-175436
	I0327 19:31:05.811472  677077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33653 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/multinode-175436/id_rsa Username:docker}
	I0327 19:31:05.903221  677077 ssh_runner.go:195] Run: systemctl --version
	I0327 19:31:05.909267  677077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:31:05.924062  677077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 19:31:05.986336  677077 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-03-27 19:31:05.976211254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 19:31:05.986947  677077 kubeconfig.go:125] found "multinode-175436" server: "https://192.168.67.2:8443"
	I0327 19:31:05.986975  677077 api_server.go:166] Checking apiserver status ...
	I0327 19:31:05.987031  677077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 19:31:05.998292  677077 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	I0327 19:31:06.010176  677077 api_server.go:182] apiserver freezer: "12:freezer:/docker/ac863444c3ac2e47f2cf606bfb754cf139ef2f6d8d5fe7153d477491c4ee881a/crio/crio-415447a003a3947b2bb2b8a013809c5ba90f8f1e36c37b2bdf8f52dce6cf24a0"
	I0327 19:31:06.010269  677077 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ac863444c3ac2e47f2cf606bfb754cf139ef2f6d8d5fe7153d477491c4ee881a/crio/crio-415447a003a3947b2bb2b8a013809c5ba90f8f1e36c37b2bdf8f52dce6cf24a0/freezer.state
	I0327 19:31:06.021994  677077 api_server.go:204] freezer state: "THAWED"
	I0327 19:31:06.022038  677077 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0327 19:31:06.030390  677077 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0327 19:31:06.030422  677077 status.go:422] multinode-175436 apiserver status = Running (err=<nil>)
	I0327 19:31:06.030436  677077 status.go:257] multinode-175436 status: &{Name:multinode-175436 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 19:31:06.030454  677077 status.go:255] checking status of multinode-175436-m02 ...
	I0327 19:31:06.030787  677077 cli_runner.go:164] Run: docker container inspect multinode-175436-m02 --format={{.State.Status}}
	I0327 19:31:06.049305  677077 status.go:330] multinode-175436-m02 host status = "Running" (err=<nil>)
	I0327 19:31:06.049347  677077 host.go:66] Checking if "multinode-175436-m02" exists ...
	I0327 19:31:06.049653  677077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-175436-m02
	I0327 19:31:06.066577  677077 host.go:66] Checking if "multinode-175436-m02" exists ...
	I0327 19:31:06.066902  677077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0327 19:31:06.066962  677077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-175436-m02
	I0327 19:31:06.083649  677077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33658 SSHKeyPath:/home/jenkins/minikube-integration/18517-562206/.minikube/machines/multinode-175436-m02/id_rsa Username:docker}
	I0327 19:31:06.171981  677077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 19:31:06.183829  677077 status.go:257] multinode-175436-m02 status: &{Name:multinode-175436-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0327 19:31:06.183867  677077 status.go:255] checking status of multinode-175436-m03 ...
	I0327 19:31:06.184166  677077 cli_runner.go:164] Run: docker container inspect multinode-175436-m03 --format={{.State.Status}}
	I0327 19:31:06.202666  677077 status.go:330] multinode-175436-m03 host status = "Stopped" (err=<nil>)
	I0327 19:31:06.202688  677077 status.go:343] host is not running, skipping remaining checks
	I0327 19:31:06.202696  677077 status.go:257] multinode-175436-m03 status: &{Name:multinode-175436-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.10s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-175436 node start m03 -v=7 --alsologtostderr: (9.367478767s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (107.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-175436
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-175436
E0327 19:31:33.832776  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-175436: (24.829280961s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175436 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-175436 --wait=true -v=8 --alsologtostderr: (1m22.508376746s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-175436
--- PASS: TestMultiNode/serial/RestartKeepsNodes (107.49s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-175436 node delete m03: (5.004432269s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.69s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-175436 stop: (23.635311005s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-175436 status: exit status 7 (100.190488ms)

                                                
                                                
-- stdout --
	multinode-175436
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175436-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-175436 status --alsologtostderr: exit status 7 (88.253689ms)

                                                
                                                
-- stdout --
	multinode-175436
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175436-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 19:33:33.271517  684644 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:33:33.271652  684644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:33:33.271662  684644 out.go:304] Setting ErrFile to fd 2...
	I0327 19:33:33.271667  684644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:33:33.271912  684644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 19:33:33.272084  684644 out.go:298] Setting JSON to false
	I0327 19:33:33.272117  684644 mustload.go:65] Loading cluster: multinode-175436
	I0327 19:33:33.272244  684644 notify.go:220] Checking for updates...
	I0327 19:33:33.272522  684644 config.go:182] Loaded profile config "multinode-175436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0327 19:33:33.272533  684644 status.go:255] checking status of multinode-175436 ...
	I0327 19:33:33.273314  684644 cli_runner.go:164] Run: docker container inspect multinode-175436 --format={{.State.Status}}
	I0327 19:33:33.289500  684644 status.go:330] multinode-175436 host status = "Stopped" (err=<nil>)
	I0327 19:33:33.289525  684644 status.go:343] host is not running, skipping remaining checks
	I0327 19:33:33.289533  684644 status.go:257] multinode-175436 status: &{Name:multinode-175436 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0327 19:33:33.289560  684644 status.go:255] checking status of multinode-175436-m02 ...
	I0327 19:33:33.289866  684644 cli_runner.go:164] Run: docker container inspect multinode-175436-m02 --format={{.State.Status}}
	I0327 19:33:33.304449  684644 status.go:330] multinode-175436-m02 host status = "Stopped" (err=<nil>)
	I0327 19:33:33.304473  684644 status.go:343] host is not running, skipping remaining checks
	I0327 19:33:33.304481  684644 status.go:257] multinode-175436-m02 status: &{Name:multinode-175436-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175436 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0327 19:34:17.457574  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-175436 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (56.314800389s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175436 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.05s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-175436
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175436-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-175436-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.086646ms)

                                                
                                                
-- stdout --
	* [multinode-175436-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-175436-m02' is duplicated with machine name 'multinode-175436-m02' in profile 'multinode-175436'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175436-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-175436-m03 --driver=docker  --container-runtime=crio: (35.540739083s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-175436
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-175436: exit status 80 (335.829544ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-175436 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-175436-m03 already exists in multinode-175436-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-175436-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-175436-m03: (1.903253877s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.94s)

                                                
                                    
TestPreload (123.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-277132 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0327 19:36:33.833647  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-277132 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.489967379s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-277132 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-277132 image pull gcr.io/k8s-minikube/busybox: (1.901199565s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-277132
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-277132: (5.811885167s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-277132 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-277132 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.901388278s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-277132 image list
helpers_test.go:175: Cleaning up "test-preload-277132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-277132
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-277132: (2.534399342s)
--- PASS: TestPreload (123.97s)

                                                
                                    
TestScheduledStopUnix (104.84s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-241049 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-241049 --memory=2048 --driver=docker  --container-runtime=crio: (28.211933674s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-241049 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-241049 -n scheduled-stop-241049
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-241049 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-241049 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-241049 -n scheduled-stop-241049
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-241049
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-241049 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-241049
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-241049: exit status 7 (72.598614ms)

                                                
                                                
-- stdout --
	scheduled-stop-241049
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-241049 -n scheduled-stop-241049
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-241049 -n scheduled-stop-241049: exit status 7 (79.516218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-241049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-241049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-241049: (4.935766555s)
--- PASS: TestScheduledStopUnix (104.84s)

                                                
                                    
TestInsufficientStorage (10.31s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-564300 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-564300 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.838652673s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"187ea631-3142-46b5-a72f-c9fe2dcb78e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-564300] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"439ca297-76b8-4d84-a7e7-94b48f0c7505","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18517"}}
	{"specversion":"1.0","id":"02e5f3b8-bb3e-41e4-91c6-7993993d7ee9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"22f004cf-939b-4f5b-8035-177bd9ffa5ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig"}}
	{"specversion":"1.0","id":"294e3c6f-886b-47d2-be63-005dc195e309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube"}}
	{"specversion":"1.0","id":"fc291b4e-9695-4df1-bb8c-a49909c1c2c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8fbb1dfb-64e2-48ec-be32-7e903673889a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c36eafcc-c468-442f-82f6-a9f1a0cd84ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"69cc4d8c-a700-45d4-a0ac-d77d24641986","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3381cdc6-18cb-4413-89ae-4b18b78ea16f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c38b08e-d038-4a7e-b90d-94a7ee748341","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1e2c35b2-f6a8-4ef7-b2d0-5039c82fcb33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-564300\" primary control-plane node in \"insufficient-storage-564300\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d6fac6a-b2f1-4b75-9726-2629f447160d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-beta.0 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8a8d3b8-b5cf-45a8-b64d-c68bbba16313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f6d1eae-6d38-4cff-98a2-6fd7b1d5995d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-564300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-564300 --output=json --layout=cluster: exit status 7 (289.323149ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-564300","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-564300","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0327 19:39:09.228945  701229 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-564300" does not appear in /home/jenkins/minikube-integration/18517-562206/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-564300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-564300 --output=json --layout=cluster: exit status 7 (289.200269ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-564300","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-564300","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0327 19:39:09.518537  701285 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-564300" does not appear in /home/jenkins/minikube-integration/18517-562206/kubeconfig
	E0327 19:39:09.528394  701285 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/insufficient-storage-564300/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-564300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-564300
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-564300: (1.886811238s)
--- PASS: TestInsufficientStorage (10.31s)

                                                
                                    
TestRunningBinaryUpgrade (80.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.991914247 start -p running-upgrade-313970 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.991914247 start -p running-upgrade-313970 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.375592674s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-313970 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-313970 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.829009553s)
helpers_test.go:175: Cleaning up "running-upgrade-313970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-313970
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-313970: (2.705339927s)
--- PASS: TestRunningBinaryUpgrade (80.27s)

                                                
                                    
TestKubernetesUpgrade (381.70s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-595176 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0327 19:41:33.833676  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-595176 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.361049699s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-595176
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-595176: (1.258856294s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-595176 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-595176 status --format={{.Host}}: exit status 7 (79.591595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-595176 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-595176 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.009849646s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-595176 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-595176 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-595176 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (191.422248ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-595176] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-595176
	    minikube start -p kubernetes-upgrade-595176 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5951762 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-595176 --kubernetes-version=v1.30.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-595176 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-595176 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.224117957s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-595176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-595176
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-595176: (2.413934545s)
--- PASS: TestKubernetesUpgrade (381.70s)

                                                
                                    
TestMissingContainerUpgrade (150.98s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1628364195 start -p missing-upgrade-354263 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1628364195 start -p missing-upgrade-354263 --memory=2200 --driver=docker  --container-runtime=crio: (1m17.796916818s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-354263
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-354263: (10.495492883s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-354263
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-354263 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-354263 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (59.28215625s)
helpers_test.go:175: Cleaning up "missing-upgrade-354263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-354263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-354263: (1.968325734s)
--- PASS: TestMissingContainerUpgrade (150.98s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-236124 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-236124 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (124.005813ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-236124] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestPause/serial/Start (58.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-235148 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-235148 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (58.852442688s)
--- PASS: TestPause/serial/Start (58.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-236124 --driver=docker  --container-runtime=crio
E0327 19:39:17.458583  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 19:39:36.878625  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-236124 --driver=docker  --container-runtime=crio: (41.93649058s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-236124 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.33s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-236124 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-236124 --no-kubernetes --driver=docker  --container-runtime=crio: (4.168540879s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-236124 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-236124 status -o json: exit status 2 (315.03286ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-236124","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-236124
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-236124: (2.265790769s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.75s)

                                                
                                    
TestNoKubernetes/serial/Start (6.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-236124 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-236124 --no-kubernetes --driver=docker  --container-runtime=crio: (6.443503517s)
--- PASS: TestNoKubernetes/serial/Start (6.44s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-236124 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-236124 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.579947ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-236124
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-236124: (1.213926258s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-236124 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-236124 --driver=docker  --container-runtime=crio: (6.818593617s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.82s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.03s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-235148 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-235148 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.011041072s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-236124 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-236124 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.682487ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestPause/serial/Pause (1.02s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-235148 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-235148 --alsologtostderr -v=5: (1.017599612s)
--- PASS: TestPause/serial/Pause (1.02s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-235148 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-235148 --output=json --layout=cluster: exit status 2 (421.765264ms)

                                                
                                                
-- stdout --
	{"Name":"pause-235148","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-235148","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)

                                                
                                    
TestPause/serial/Unpause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-235148 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

                                                
                                    
TestPause/serial/PauseAgain (1.40s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-235148 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-235148 --alsologtostderr -v=5: (1.404451582s)
--- PASS: TestPause/serial/PauseAgain (1.40s)

                                                
                                    
TestPause/serial/DeletePaused (4.16s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-235148 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-235148 --alsologtostderr -v=5: (4.164412755s)
--- PASS: TestPause/serial/DeletePaused (4.16s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-235148
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-235148: exit status 1 (18.99027ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-235148: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (83.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1251932221 start -p stopped-upgrade-984460 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1251932221 start -p stopped-upgrade-984460 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.194121299s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1251932221 -p stopped-upgrade-984460 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1251932221 -p stopped-upgrade-984460 stop: (2.540272829s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-984460 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-984460 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.016063463s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.76s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-984460
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-984460: (1.241834626s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                    
TestNetworkPlugins/group/false (4.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-141608 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-141608 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (266.697223ms)

                                                
                                                
-- stdout --
	* [false-141608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18517
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 19:46:21.736023  738305 out.go:291] Setting OutFile to fd 1 ...
	I0327 19:46:21.736211  738305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:46:21.736237  738305 out.go:304] Setting ErrFile to fd 2...
	I0327 19:46:21.736254  738305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 19:46:21.736524  738305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18517-562206/.minikube/bin
	I0327 19:46:21.736989  738305 out.go:298] Setting JSON to false
	I0327 19:46:21.738014  738305 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12520,"bootTime":1711556262,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0327 19:46:21.738078  738305 start.go:139] virtualization:  
	I0327 19:46:21.741764  738305 out.go:177] * [false-141608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0327 19:46:21.743647  738305 out.go:177]   - MINIKUBE_LOCATION=18517
	I0327 19:46:21.743710  738305 notify.go:220] Checking for updates...
	I0327 19:46:21.749479  738305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 19:46:21.751586  738305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18517-562206/kubeconfig
	I0327 19:46:21.753985  738305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18517-562206/.minikube
	I0327 19:46:21.756407  738305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0327 19:46:21.758405  738305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 19:46:21.760616  738305 config.go:182] Loaded profile config "kubernetes-upgrade-595176": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0327 19:46:21.760720  738305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 19:46:21.788377  738305 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0327 19:46:21.788506  738305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0327 19:46:21.910103  738305 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-27 19:46:21.897288355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0327 19:46:21.910313  738305 docker.go:295] overlay module found
	I0327 19:46:21.912527  738305 out.go:177] * Using the docker driver based on user configuration
	I0327 19:46:21.914943  738305 start.go:297] selected driver: docker
	I0327 19:46:21.914967  738305 start.go:901] validating driver "docker" against <nil>
	I0327 19:46:21.914984  738305 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 19:46:21.918951  738305 out.go:177] 
	W0327 19:46:21.921078  738305 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0327 19:46:21.922876  738305 out.go:177] 
** /stderr **
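
Note: the MK_USAGE exit above is the expected result of this negative test; CRI-O ships no built-in network plugin, so minikube refuses --cni=false whenever --container-runtime=crio is selected. As a hedged sketch (the profile name "crio-cni-demo" is hypothetical, not from this run), any concrete --cni choice passes the same validation:

	out/minikube-linux-arm64 start -p crio-cni-demo --memory=2048 --driver=docker --container-runtime=crio --cni=bridge
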
net_test.go:88: 
----------------------- debugLogs start: false-141608 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-141608

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-141608

>>> host: /etc/nsswitch.conf:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /etc/hosts:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /etc/resolv.conf:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-141608

>>> host: crictl pods:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: crictl containers:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> k8s: describe netcat deployment:
error: context "false-141608" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-141608" does not exist

>>> k8s: netcat logs:
error: context "false-141608" does not exist

>>> k8s: describe coredns deployment:
error: context "false-141608" does not exist

>>> k8s: describe coredns pods:
error: context "false-141608" does not exist

>>> k8s: coredns logs:
error: context "false-141608" does not exist

>>> k8s: describe api server pod(s):
error: context "false-141608" does not exist

>>> k8s: api server logs:
error: context "false-141608" does not exist

>>> host: /etc/cni:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: ip a s:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: ip r s:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: iptables-save:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: iptables table nat:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> k8s: describe kube-proxy daemon set:
error: context "false-141608" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-141608" does not exist

>>> k8s: kube-proxy logs:
error: context "false-141608" does not exist

>>> host: kubelet daemon status:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: kubelet daemon config:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> k8s: kubelet logs:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 27 Mar 2024 19:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-595176
contexts:
- context:
    cluster: kubernetes-upgrade-595176
    user: kubernetes-upgrade-595176
  name: kubernetes-upgrade-595176
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-595176
  user:
    client-certificate: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kubernetes-upgrade-595176/client.crt
    client-key: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kubernetes-upgrade-595176/client.key
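
Note: the kubeconfig above is dumped verbatim by the debugLogs helper. current-context is empty and only the concurrently running kubernetes-upgrade-595176 profile is registered, presumably because the false-141608 cluster was never created. Standard kubectl commands (not part of this run) to list and select a context:

	kubectl config get-contexts
	kubectl config use-context kubernetes-upgrade-595176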
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-141608

>>> host: docker daemon status:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: docker daemon config:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /etc/docker/daemon.json:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: docker system info:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: cri-docker daemon status:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: cri-docker daemon config:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: cri-dockerd version:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: containerd daemon status:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: containerd daemon config:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /etc/containerd/config.toml:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: containerd config dump:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: crio daemon status:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: crio daemon config:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: /etc/crio:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

>>> host: crio config:
* Profile "false-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-141608"

----------------------- debugLogs end: false-141608 [took: 4.502063611s] --------------------------------
helpers_test.go:175: Cleaning up "false-141608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-141608
--- PASS: TestNetworkPlugins/group/false (4.93s)

TestStartStop/group/old-k8s-version/serial/FirstStart (160.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-553313 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-553313 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m40.890531255s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (160.89s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-553313 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [75cd07d1-e7c3-40ae-a0fc-0cf276695f74] Pending
helpers_test.go:344: "busybox" [75cd07d1-e7c3-40ae-a0fc-0cf276695f74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [75cd07d1-e7c3-40ae-a0fc-0cf276695f74] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003187261s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-553313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-553313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-553313 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-553313 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-553313 --alsologtostderr -v=3: (12.024959772s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-553313 -n old-k8s-version-553313
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-553313 -n old-k8s-version-553313: exit status 7 (75.81265ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-553313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
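
Note: exit status 7 from "minikube status" is the expected code for a stopped cluster, which is why the test logs "may be ok". Per the status command's help text, the exit code encodes component health bit-wise from right to left, so a fully stopped profile reports 1 (minikube NOK) + 2 (cluster NOK) + 4 (Kubernetes NOK) = 7. A hedged sketch of checking it by hand:

	out/minikube-linux-arm64 status -p old-k8s-version-553313; echo "exit: $?"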

TestStartStop/group/old-k8s-version/serial/SecondStart (52.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-553313 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-553313 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (51.55402844s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-553313 -n old-k8s-version-553313
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.27s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-349282 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3
E0327 19:51:33.833315  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-349282 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3: (1m22.107510627s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-c6w7q" [e63c7a0e-9592-43e8-8e3b-6e3aaf2cf4ef] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-c6w7q" [e63c7a0e-9592-43e8-8e3b-6e3aaf2cf4ef] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.004241127s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-c6w7q" [e63c7a0e-9592-43e8-8e3b-6e3aaf2cf4ef] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003599867s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-553313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-553313 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/old-k8s-version/serial/Pause (3.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-553313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-553313 -n old-k8s-version-553313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-553313 -n old-k8s-version-553313: exit status 2 (365.802245ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-553313 -n old-k8s-version-553313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-553313 -n old-k8s-version-553313: exit status 2 (367.92991ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-553313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-553313 -n old-k8s-version-553313
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-553313 -n old-k8s-version-553313
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.58s)

TestStartStop/group/embed-certs/serial/FirstStart (77.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-100917 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-100917 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3: (1m17.708092965s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.71s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-349282 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb7220ba-af00-4231-a1f3-f59aa7f807f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb7220ba-af00-4231-a1f3-f59aa7f807f6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003806422s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-349282 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-349282 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-349282 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.606972305s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-349282 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.84s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-349282 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-349282 --alsologtostderr -v=3: (12.046811142s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282: exit status 7 (76.141659ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-349282 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-349282 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-349282 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3: (4m58.627522842s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.06s)

TestStartStop/group/embed-certs/serial/DeployApp (8.55s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-100917 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4faec7e4-b6ff-4b55-a8ab-02416c9e6401] Pending
helpers_test.go:344: "busybox" [4faec7e4-b6ff-4b55-a8ab-02416c9e6401] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4faec7e4-b6ff-4b55-a8ab-02416c9e6401] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.010896465s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-100917 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.55s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-100917 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-100917 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.249468588s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-100917 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/embed-certs/serial/Stop (12.56s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-100917 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-100917 --alsologtostderr -v=3: (12.559102696s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.56s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-100917 -n embed-certs-100917
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-100917 -n embed-certs-100917: exit status 7 (79.282401ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-100917 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (289.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-100917 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3
E0327 19:54:17.457718  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 19:55:35.157686  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:35.162979  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:35.173284  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:35.193561  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:35.233890  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:35.314195  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:35.474658  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:35.795347  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:36.436156  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:37.716400  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:40.277311  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:45.398246  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:55:55.638730  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:56:16.119072  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 19:56:16.879509  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:56:33.833610  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
E0327 19:56:57.079544  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-100917 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3: (4m48.83231793s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-100917 -n embed-certs-100917
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (289.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s79b5" [f45b760d-322a-49af-afd2-de74d54ea5fb] Running
E0327 19:58:18.999774  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004278889s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s79b5" [f45b760d-322a-49af-afd2-de74d54ea5fb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00469745s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-349282 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-349282 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-349282 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-349282 --alsologtostderr -v=1: (1.092690242s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282: exit status 2 (473.826409ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282: exit status 2 (322.991772ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-349282 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-349282 -n default-k8s-diff-port-349282
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)
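
The Pause subtest above always follows the same cycle: pause, confirm the apiserver reports Paused and the kubelet reports Stopped (minikube status exits non-zero for either state, hence the "may be ok" notes), then unpause and re-check. A hand-run sketch of the same checks, using the profile name from this run:

    p=default-k8s-diff-port-349282
    out/minikube-linux-arm64 pause -p "$p"
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$p"  # expect Paused, exit status 2
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p "$p"    # expect Stopped, exit status 2
    out/minikube-linux-arm64 unpause -p "$p"
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$p"  # both checks should now succeed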

TestStartStop/group/no-preload/serial/FirstStart (64.59s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-434549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-434549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (1m4.585353121s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.59s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zpgw8" [d3a74244-88d7-4383-94be-ba9db2eabbaa] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010806699s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zpgw8" [d3a74244-88d7-4383-94be-ba9db2eabbaa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004756376s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-100917 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-100917 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (4.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-100917 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-100917 --alsologtostderr -v=1: (1.130217303s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-100917 -n embed-certs-100917
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-100917 -n embed-certs-100917: exit status 2 (402.278951ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-100917 -n embed-certs-100917
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-100917 -n embed-certs-100917: exit status 2 (369.087798ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-100917 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-100917 -n embed-certs-100917
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-100917 -n embed-certs-100917
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.11s)

TestStartStop/group/newest-cni/serial/FirstStart (52.77s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-684946 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
E0327 19:59:17.457990  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-684946 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (52.774813017s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.77s)

TestStartStop/group/no-preload/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-434549 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bd53b05e-a618-4fd6-a50d-7e358896db8f] Pending
helpers_test.go:344: "busybox" [bd53b05e-a618-4fd6-a50d-7e358896db8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bd53b05e-a618-4fd6-a50d-7e358896db8f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.008188253s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-434549 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.43s)
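
The DeployApp step is three commands: create the pod from the test's manifest, wait for it to become Ready, then verify the container's file-descriptor limit. A hand-run equivalent, assuming the testdata/busybox.yaml manifest from the minikube source tree (the test polls with its own helper; kubectl wait is a stand-in):

    kubectl --context no-preload-434549 create -f testdata/busybox.yaml
    kubectl --context no-preload-434549 wait --for=condition=ready pod busybox --timeout=8m
    kubectl --context no-preload-434549 exec busybox -- /bin/sh -c "ulimit -n"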

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-434549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-434549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.189745965s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-434549 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.30s)
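
The enable call above swaps the metrics-server image for a stand-in via per-addon --images and --registries overrides; the describe call is how the test confirms the override landed. A sketch of that verification, assuming the registry override simply prefixes the image reference (the grep pattern is illustrative):

    kubectl --context no-preload-434549 -n kube-system describe deploy/metrics-server \
      | grep -i 'image:'   # expect something like fake.domain/registry.k8s.io/echoserver:1.4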

TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-434549 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-434549 --alsologtostderr -v=3: (12.027992838s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-434549 -n no-preload-434549
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-434549 -n no-preload-434549: exit status 7 (139.983325ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-434549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.43s)
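
Enabling an addon while the profile is stopped effectively just records the setting in the profile's config; it takes effect on the next start (the SecondStart below). A sketch of the sequence:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-434549   # prints Stopped, exit status 7
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-434549 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4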

TestStartStop/group/no-preload/serial/SecondStart (279.16s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-434549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-434549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (4m38.685873734s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-434549 -n no-preload-434549
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (279.16s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-684946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-684946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.149569322s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-684946 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-684946 --alsologtostderr -v=3: (1.247161291s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684946 -n newest-cni-684946
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684946 -n newest-cni-684946: exit status 7 (97.81559ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-684946 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (22.58s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-684946 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-684946 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (22.159420775s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684946 -n newest-cni-684946
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.58s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-684946 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-684946 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684946 -n newest-cni-684946
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684946 -n newest-cni-684946: exit status 2 (327.510495ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684946 -n newest-cni-684946
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684946 -n newest-cni-684946: exit status 2 (320.767417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-684946 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684946 -n newest-cni-684946
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684946 -n newest-cni-684946
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.94s)

TestNetworkPlugins/group/auto/Start (86.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0327 20:00:35.157203  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 20:01:02.840013  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
E0327 20:01:33.832839  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.285101919s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-141608 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-141608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-st75j" [57b0d83d-a4f7-4cad-a0e2-a9961848dfe5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-st75j" [57b0d83d-a4f7-4cad-a0e2-a9961848dfe5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003584036s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-141608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
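
The DNS, Localhost, and HairPin subtests above are three probes run inside the same netcat deployment: service-name resolution, a loopback connection, and a hairpin connection back to the pod through its own service. Run by hand they look like this (context name from this run):

    ctx="--context auto-141608"
    # DNS: resolve the in-cluster apiserver service name.
    kubectl $ctx exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod can reach its own listener over loopback.
    kubectl $ctx exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod can reach itself via the netcat service VIP.
    kubectl $ctx exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"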

TestNetworkPlugins/group/kindnet/Start (78.92s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0327 20:02:50.869401  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:50.874669  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:50.884909  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:50.908800  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:50.949055  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:51.029337  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:51.190090  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:51.510613  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:52.151233  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:53.431445  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:02:55.991648  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:03:01.111949  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:03:11.352844  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:03:31.833548  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m18.920147217s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.92s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bgl2n" [c2bdeb8a-ad07-4f13-ac61-4de96cdd5832] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004186167s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
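
The ControllerPod check simply waits for the CNI's daemonset pod to report Ready under its label. The test polls with its own helper; kubectl wait is an equivalent one-liner:

    kubectl --context kindnet-141608 -n kube-system wait \
      --for=condition=ready pod -l app=kindnet --timeout=10m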

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-141608 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-141608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f8kxh" [94d4864f-e370-431c-9b80-3cc8b1aa2612] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0327 20:04:00.504841  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-f8kxh" [94d4864f-e370-431c-9b80-3cc8b1aa2612] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004461427s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-141608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (73.26s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m13.260386444s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-vftnk" [8895dad7-847d-4a0f-8169-b0e646016523] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004421835s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-vftnk" [8895dad7-847d-4a0f-8169-b0e646016523] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004973314s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-434549 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-434549 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (4.61s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-434549 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-434549 --alsologtostderr -v=1: (1.236538s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-434549 -n no-preload-434549
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-434549 -n no-preload-434549: exit status 2 (540.987067ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-434549 -n no-preload-434549
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-434549 -n no-preload-434549: exit status 2 (408.863462ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-434549 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-434549 --alsologtostderr -v=1: (1.159739745s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-434549 -n no-preload-434549
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-434549 -n no-preload-434549
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.61s)
E0327 20:08:50.855212  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:50.860516  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:50.871312  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:50.891575  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:50.931839  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:51.012956  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:51.173283  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:51.493403  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:52.134180  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:53.414741  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:08:55.974940  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:09:01.095154  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:09:11.335734  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:09:17.458436  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/functional-990825/client.crt: no such file or directory
E0327 20:09:31.816233  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kindnet-141608/client.crt: no such file or directory
E0327 20:09:36.289233  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:36.294567  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:36.304883  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:36.325213  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:36.365469  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:36.445851  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:36.606942  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:36.927564  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:37.568637  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:38.849660  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:41.409931  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:43.166887  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:09:46.530191  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory
E0327 20:09:56.770583  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/no-preload-434549/client.crt: no such file or directory

TestNetworkPlugins/group/custom-flannel/Start (71.71s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0327 20:05:34.715393  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
E0327 20:05:35.157249  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/old-k8s-version-553313/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.709782772s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.71s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4wbvf" [eaf995aa-9def-4e36-bb34-27f295193683] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006082388s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-141608 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (10.56s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-141608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pc85p" [576b2f70-45d0-4adc-a9eb-4ec0256331f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pc85p" [576b2f70-45d0-4adc-a9eb-4ec0256331f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.033408674s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.56s)

TestNetworkPlugins/group/calico/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-141608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.37s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-141608 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-141608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w7d2c" [65546e4c-128e-4671-81fc-9c2a01227f3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w7d2c" [65546e4c-128e-4671-81fc-9c2a01227f3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004213432s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-141608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (92.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0327 20:06:33.832723  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/addons-408183/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m32.906986094s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.91s)

TestNetworkPlugins/group/flannel/Start (70.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0327 20:06:59.325165  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:06:59.330425  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:06:59.340743  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:06:59.361031  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:06:59.401318  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:06:59.481634  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:06:59.642057  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:06:59.962443  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:07:00.602871  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:07:01.883785  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:07:04.443957  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:07:09.564560  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:07:19.804793  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:07:40.285601  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/auto-141608/client.crt: no such file or directory
E0327 20:07:50.869676  567623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/default-k8s-diff-port-349282/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.760116257s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.76s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-141608 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-141608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-smgpb" [c01807f4-a2c8-4dea-9fb5-8d85b9c178f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-smgpb" [c01807f4-a2c8-4dea-9fb5-8d85b9c178f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004002338s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)
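For reference, the NetCatPod steps above deploy testdata/netcat-deployment.yaml and then poll for pods labeled app=netcat to become Ready. A roughly equivalent manual check, assuming the same kubectl context (a sketch, not the test's own code):

	# Re-apply the deployment, then block until its pods are Ready,
	# mirroring the test's 15m0s wait.
	kubectl --context enable-default-cni-141608 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context enable-default-cni-141608 wait --for=condition=ready pod -l app=netcat --timeout=15m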

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cxkrm" [0bc42727-eee0-4453-8456-83837eb40af5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00426958s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
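The ControllerPod step waits for the flannel DaemonSet pods (label app=flannel, namespace kube-flannel) to become Ready. A comparable manual check, assuming the same context name (sketch only):

	# List the flannel pods, or block until they report Ready.
	kubectl --context flannel-141608 -n kube-flannel get pods -l app=flannel
	kubectl --context flannel-141608 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m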

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-141608 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-141608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dwt46" [ee96779c-1f42-4691-a992-86053bf2a7f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dwt46" [ee96779c-1f42-4691-a992-86053bf2a7f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004145975s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-141608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-141608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/Start (84.57s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-141608 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m24.56831973s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.57s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-141608 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-141608 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zkdbq" [5a403d57-0075-408d-a3ad-44bbbc375e36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zkdbq" [5a403d57-0075-408d-a3ad-44bbbc375e36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00390858s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-141608 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
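The HairPin checks above exercise hairpin NAT: the netcat pod dials its own Service by name ("netcat" on port 8080), so the connection leaves the pod, hits the service VIP, and must be routed back to the very same pod. A manual reproduction sketch, assuming the netcat deployment and its companion service from testdata are still deployed:

	# -z probes without sending data; -w 5 sets a 5s timeout; -i 5 spaces probes out.
	kubectl --context bridge-141608 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"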

Test skip (32/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-842619 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-842619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-842619
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-740782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-740782
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.74s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-141608 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-141608

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-141608

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /etc/hosts:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /etc/resolv.conf:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-141608

>>> host: crictl pods:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: crictl containers:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> k8s: describe netcat deployment:
error: context "kubenet-141608" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-141608" does not exist

>>> k8s: netcat logs:
error: context "kubenet-141608" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-141608" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-141608" does not exist

>>> k8s: coredns logs:
error: context "kubenet-141608" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-141608" does not exist

>>> k8s: api server logs:
error: context "kubenet-141608" does not exist

>>> host: /etc/cni:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: ip a s:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: ip r s:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: iptables-save:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: iptables table nat:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-141608" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-141608" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-141608" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: kubelet daemon config:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> k8s: kubelet logs:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 27 Mar 2024 19:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-595176
contexts:
- context:
    cluster: kubernetes-upgrade-595176
    user: kubernetes-upgrade-595176
  name: kubernetes-upgrade-595176
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-595176
  user:
    client-certificate: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kubernetes-upgrade-595176/client.crt
    client-key: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kubernetes-upgrade-595176/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-141608

>>> host: docker daemon status:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: docker daemon config:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: docker system info:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: cri-docker daemon status:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: cri-docker daemon config:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: cri-dockerd version:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: containerd daemon status:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: containerd daemon config:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: containerd config dump:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: crio daemon status:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: crio daemon config:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: /etc/crio:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

>>> host: crio config:
* Profile "kubenet-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-141608"

----------------------- debugLogs end: kubenet-141608 [took: 4.576457275s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-141608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-141608
--- SKIP: TestNetworkPlugins/group/kubenet (4.74s)

TestNetworkPlugins/group/cilium (6.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-141608 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-141608

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-141608" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-141608

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-141608

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-141608" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-141608" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-141608" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-141608" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-141608" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: kubelet daemon config:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> k8s: kubelet logs:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18517-562206/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 27 Mar 2024 19:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-595176
contexts:
- context:
    cluster: kubernetes-upgrade-595176
    user: kubernetes-upgrade-595176
  name: kubernetes-upgrade-595176
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-595176
  user:
    client-certificate: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kubernetes-upgrade-595176/client.crt
    client-key: /home/jenkins/minikube-integration/18517-562206/.minikube/profiles/kubernetes-upgrade-595176/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-141608

>>> host: docker daemon status:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: docker daemon config:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: docker system info:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: cri-docker daemon status:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: cri-docker daemon config:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: cri-dockerd version:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: containerd daemon status:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: containerd daemon config:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: containerd config dump:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: crio daemon status:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: crio daemon config:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: /etc/crio:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

>>> host: crio config:
* Profile "cilium-141608" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-141608"

----------------------- debugLogs end: cilium-141608 [took: 6.246940458s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-141608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-141608
--- SKIP: TestNetworkPlugins/group/cilium (6.50s)
